Jan 25 07:56:56 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 25 07:56:56 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 25 07:56:56 crc restorecon[4689]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 
07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:56
crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 
07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 25 07:56:56 crc restorecon[4689]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:56 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 
crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc 
restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc 
restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 07:56:57 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 25 07:56:57 crc kubenswrapper[4832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 07:56:57 crc kubenswrapper[4832]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 25 07:56:57 crc kubenswrapper[4832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 07:56:57 crc kubenswrapper[4832]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 25 07:56:57 crc kubenswrapper[4832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 25 07:56:57 crc kubenswrapper[4832]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.450609 4832 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454716 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454741 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454747 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454758 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454765 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454772 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454778 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454785 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454794 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454801 4832 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454808 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454815 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454822 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454832 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454840 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454847 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454853 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454860 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454868 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454874 4832 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454881 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454888 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454894 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454899 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454904 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454910 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454916 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454921 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454927 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454932 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454937 4832 feature_gate.go:330] unrecognized feature gate: Example
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454943 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454948 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454954 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454960 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454967 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454973 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454978 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454984 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454990 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.454996 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455002 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455007 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455012 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455020 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455027 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455034 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455040 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455046 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455051 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455057 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455063 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455068 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455075 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455080 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455085 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455091 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455096 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455101 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455107 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455112 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455117 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455123 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455128 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455135 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455143 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455149 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455155 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455160 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455165 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.455170 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455282 4832 flags.go:64] FLAG: --address="0.0.0.0"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455294 4832 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455310 4832 flags.go:64] FLAG: --anonymous-auth="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455326 4832 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455336 4832 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455346 4832 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455355 4832 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455363 4832 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455370 4832 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455377 4832 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455406 4832 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455414 4832 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455421 4832 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455427 4832 flags.go:64] FLAG: --cgroup-root=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455433 4832 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455439 4832 flags.go:64] FLAG: --client-ca-file=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455445 4832 flags.go:64] FLAG: --cloud-config=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455451 4832 flags.go:64] FLAG: --cloud-provider=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455458 4832 flags.go:64] FLAG: --cluster-dns="[]"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455466 4832 flags.go:64] FLAG: --cluster-domain=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455472 4832 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455478 4832 flags.go:64] FLAG: --config-dir=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455484 4832 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455490 4832 flags.go:64] FLAG: --container-log-max-files="5"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455498 4832 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455504 4832 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455510 4832 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455516 4832 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455523 4832 flags.go:64] FLAG: --contention-profiling="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455529 4832 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455535 4832 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455541 4832 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455547 4832 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455564 4832 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455570 4832 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455577 4832 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455583 4832 flags.go:64] FLAG: --enable-load-reader="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455589 4832 flags.go:64] FLAG: --enable-server="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455595 4832 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455606 4832 flags.go:64] FLAG: --event-burst="100"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455612 4832 flags.go:64] FLAG: --event-qps="50"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455618 4832 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455624 4832 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455630 4832 flags.go:64] FLAG: --eviction-hard=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455638 4832 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455644 4832 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455650 4832 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455656 4832 flags.go:64] FLAG: --eviction-soft=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455662 4832 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455669 4832 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455674 4832 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455680 4832 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455686 4832 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455693 4832 flags.go:64] FLAG: --fail-swap-on="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455700 4832 flags.go:64] FLAG: --feature-gates=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455709 4832 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455717 4832 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455724 4832 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455732 4832 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455740 4832 flags.go:64] FLAG: --healthz-port="10248"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455747 4832 flags.go:64] FLAG: --help="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455753 4832 flags.go:64] FLAG: --hostname-override=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455759 4832 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455766 4832 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455772 4832 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455780 4832 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455786 4832 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455792 4832 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455798 4832 flags.go:64] FLAG: --image-service-endpoint=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455804 4832 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455810 4832 flags.go:64] FLAG: --kube-api-burst="100"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455816 4832 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455823 4832 flags.go:64] FLAG: --kube-api-qps="50"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455829 4832 flags.go:64] FLAG: --kube-reserved=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455835 4832 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455841 4832 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455848 4832 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455854 4832 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455860 4832 flags.go:64] FLAG: --lock-file=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455866 4832 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455872 4832 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455878 4832 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455887 4832 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455893 4832 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455899 4832 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455905 4832 flags.go:64] FLAG: --logging-format="text"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455912 4832 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455918 4832 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455924 4832 flags.go:64] FLAG: --manifest-url=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455931 4832 flags.go:64] FLAG: --manifest-url-header=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455939 4832 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455945 4832 flags.go:64] FLAG: --max-open-files="1000000"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455952 4832 flags.go:64] FLAG: --max-pods="110"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455958 4832 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455964 4832 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455971 4832 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455976 4832 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455983 4832 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455989 4832 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.455995 4832 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456008 4832 flags.go:64] FLAG: --node-status-max-images="50"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456015 4832 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456021 4832 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456027 4832 flags.go:64] FLAG: --pod-cidr=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456033 4832 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456043 4832 flags.go:64] FLAG: --pod-manifest-path=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456049 4832 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456055 4832 flags.go:64] FLAG: --pods-per-core="0"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456061 4832 flags.go:64] FLAG: --port="10250"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456067 4832 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456073 4832 flags.go:64] FLAG: --provider-id=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456080 4832 flags.go:64] FLAG: --qos-reserved=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456086 4832 flags.go:64] FLAG: --read-only-port="10255"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456092 4832 flags.go:64] FLAG: --register-node="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456098 4832 flags.go:64] FLAG: --register-schedulable="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456104 4832 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456114 4832 flags.go:64] FLAG: --registry-burst="10"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456147 4832 flags.go:64] FLAG: --registry-qps="5"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456154 4832 flags.go:64] FLAG: --reserved-cpus=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456160 4832 flags.go:64] FLAG: --reserved-memory=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456167 4832 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456174 4832 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456180 4832 flags.go:64] FLAG: --rotate-certificates="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456186 4832 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456193 4832 flags.go:64] FLAG: --runonce="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456199 4832 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456205 4832 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456212 4832 flags.go:64] FLAG: --seccomp-default="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456218 4832 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456224 4832 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456230 4832 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456237 4832 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456243 4832 flags.go:64] FLAG: --storage-driver-password="root"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456249 4832 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456256 4832 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456262 4832 flags.go:64] FLAG: --storage-driver-user="root"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456268 4832 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456274 4832 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456281 4832 flags.go:64] FLAG: --system-cgroups=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456286 4832 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456296 4832 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456303 4832 flags.go:64] FLAG: --tls-cert-file=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456311 4832 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456320 4832 flags.go:64] FLAG: --tls-min-version=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456328 4832 flags.go:64] FLAG: --tls-private-key-file=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456335 4832 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456342 4832 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456351 4832 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456359 4832 flags.go:64] FLAG: --v="2"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456367 4832 flags.go:64] FLAG: --version="false"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456375 4832 flags.go:64] FLAG: --vmodule=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456408 4832 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.456415 4832 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456569 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456576 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456582 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456587 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456593 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456598 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456604 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456610 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456616 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456621 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456627 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456632 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456637 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456647 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456652 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456658 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456663 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456668 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456674 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456679 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456686 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456693 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456698 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456704 4832 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456710 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456716 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456721 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456726 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456731 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456737 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456743 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456748 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456753 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456759 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456764 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456769 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456775 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456780 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456785 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456790 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456797 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456802 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456809 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456816 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456835 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456860 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456867 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456874 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456881 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456887 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456893 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456899 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456906 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456911 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456917 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456924 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456931 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456938 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456945 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456951 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456956 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456962 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456967 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456972 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456978 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456983 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456989 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.456994 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.457000 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.457005 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.457010 4832 feature_gate.go:330] unrecognized feature gate: Example
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.457027 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.467976 4832 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.468012 4832 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468104 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468138 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468143 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468147 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468151 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468156 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468160 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468163 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468167 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468171 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468174 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468178 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468182 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468187 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468192 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468196 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468199 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468203 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468207 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468210 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468214 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468218 4832 feature_gate.go:330] unrecognized
feature gate: MultiArchInstallAzure Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468223 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468227 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468231 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468236 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468240 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468244 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468248 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468253 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468262 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468269 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468274 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468279 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468283 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468288 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468293 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468297 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468301 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468304 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468308 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468311 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468315 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468319 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468324 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468328 4832 feature_gate.go:330] unrecognized feature gate: Example Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468332 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468336 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468340 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468344 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468348 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468352 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468356 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468359 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468363 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468366 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468370 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468374 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468378 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468402 4832 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiVCenters Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468408 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468412 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468415 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468420 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468425 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468429 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468432 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468436 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468440 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468444 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468448 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.468456 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468599 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468607 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468611 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468616 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468620 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468624 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468628 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468632 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468636 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468642 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468647 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468654 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468659 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468664 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468668 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468673 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468677 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468682 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468686 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468690 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468697 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468701 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468706 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468711 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468716 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468721 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468726 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468730 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468734 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468738 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468743 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468747 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468751 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468755 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468760 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468764 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468769 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468776 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468781 4832 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468786 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468790 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468795 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468800 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468805 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468809 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468813 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468818 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468822 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468826 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468831 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468834 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468838 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468842 4832 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468845 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468849 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468853 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468857 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468860 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468865 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468869 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468874 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468878 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468882 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468886 4832 feature_gate.go:330] unrecognized feature gate: Example Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468890 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468893 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468898 4832 feature_gate.go:330] unrecognized feature gate: 
BareMetalLoadBalancer Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468901 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468906 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468909 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.468913 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.468920 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.469304 4832 server.go:940] "Client rotation is on, will bootstrap in background" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.472054 4832 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.472173 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.472649 4832 server.go:997] "Starting client certificate rotation" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.472671 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.473102 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-13 09:33:24.456792856 +0000 UTC Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.473264 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.477440 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.480551 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.481626 4832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.494208 4832 log.go:25] "Validated CRI v1 runtime API" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.515941 4832 log.go:25] "Validated CRI v1 image API" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.518834 4832 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.522958 4832 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-25-07-52-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.523011 4832 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.542237 4832 manager.go:217] Machine: {Timestamp:2026-01-25 07:56:57.540889713 +0000 UTC m=+0.214713256 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:55010a19-6f9d-4b9e-9f82-47bdc3835176 BootID:0979aa75-019e-429a-886d-abfe16bbe8b2 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8d:05:3e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8d:05:3e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2f:ff:12 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8d:fd:de Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:80:77:75 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:18:0d:44 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:1b:34:a7:d5:5a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:c1:15:16:83:60 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.542567 4832 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.542742 4832 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.543511 4832 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.543879 4832 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.543947 4832 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.544330 4832 topology_manager.go:138] "Creating topology manager with none policy" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.544351 4832 container_manager_linux.go:303] "Creating device plugin manager" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.544741 4832 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.544803 4832 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.545366 4832 state_mem.go:36] "Initialized new in-memory state store" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.545546 4832 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.546634 4832 kubelet.go:418] "Attempting to sync node with API server" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.546685 4832 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.546732 4832 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.546755 4832 kubelet.go:324] "Adding apiserver pod source" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.546773 4832 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.549294 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.549296 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.549627 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.549517 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.550466 4832 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.551088 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.552190 4832 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553182 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553247 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553269 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553284 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553306 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553320 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553333 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553360 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553376 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553415 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553455 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553469 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.553729 4832 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.554521 4832 server.go:1280] "Started kubelet" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.556050 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.555699 4832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 25 07:56:57 crc systemd[1]: Started Kubernetes Kubelet. Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.555695 4832 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.558380 4832 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.563694 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188dea46988d0a7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 07:56:57.554463356 +0000 UTC m=+0.228286969,LastTimestamp:2026-01-25 07:56:57.554463356 +0000 UTC m=+0.228286969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.565105 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is 
enabled Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.565151 4832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.565161 4832 server.go:460] "Adding debug handlers to kubelet server" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.565250 4832 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.565269 4832 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.565269 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.565239 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:01:24.797755069 +0000 UTC Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.565405 4832 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.565968 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="200ms" Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.566453 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.566559 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.566943 4832 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.566974 4832 factory.go:55] Registering systemd factory Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.566989 4832 factory.go:221] Registration of the systemd container factory successfully Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.567833 4832 factory.go:153] Registering CRI-O factory Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.567934 4832 factory.go:221] Registration of the crio container factory successfully Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.568278 4832 factory.go:103] Registering Raw factory Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.568403 4832 manager.go:1196] Started watching for new ooms in manager Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.569059 4832 manager.go:319] Starting recovery of all containers Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577494 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577563 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 25 
07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577591 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577606 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577621 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577636 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577707 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577726 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577855 4832 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577873 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577887 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577909 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577921 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577936 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577950 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.577963 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578037 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578050 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578064 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578077 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578121 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578188 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578204 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578225 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578269 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578287 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578303 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578400 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578461 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578475 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578492 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578505 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578602 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" 
seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578645 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578658 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578671 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578720 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578736 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578748 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 
07:56:57.578760 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578791 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578869 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578881 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578889 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578941 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578954 4832 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.578991 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579005 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579043 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579062 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579077 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579088 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579166 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579181 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579198 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579216 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579250 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579302 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579316 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579328 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579362 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579438 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579479 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579494 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579531 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579546 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579560 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579574 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579623 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579638 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579651 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579691 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579737 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579751 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579764 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579885 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579919 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579933 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.579994 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580012 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580053 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580066 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580101 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580113 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580153 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580165 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580178 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580250 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580312 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580324 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580344 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580433 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580506 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580569 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580585 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580597 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580635 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580653 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580666 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580678 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580691 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580702 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580714 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580726 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580878 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580899 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580912 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.580971 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.581014 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.581031 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.581043 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.581056 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.582787 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.582850 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.582867 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.582888 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.582913 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.582926 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.582948 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583070 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583098 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583109 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583122 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583138 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583199 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583218 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583231 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583243 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583401 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583418 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.583436 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584518 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584585 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584601 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584626 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584638 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584657 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584686 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584701 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584716 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584727 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.584742 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585370 4832 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585423 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585437 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585455 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585468 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585480 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585500 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585515 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585532 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585549 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585562 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585576 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585589 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585605 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585617 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585630 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585647 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585661 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585689 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585708 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585723 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585744 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585756 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585768 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585786 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585810 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585832 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585853 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585870 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585893 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585908 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585929 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585942 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585957 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585976 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.585991 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586012 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586030 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586046 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586067 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586080 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586098 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586113 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586129 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586158 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586176 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec"
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586191 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586214 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586228 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586248 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586261 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586277 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586297 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586312 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586339 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586357 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586372 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586409 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586424 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586446 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586462 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586478 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586498 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586512 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586529 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586542 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586554 4832 reconstruct.go:97] "Volume reconstruction finished" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.586563 4832 reconciler.go:26] "Reconciler: start to sync state" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.599891 4832 manager.go:324] Recovery completed Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.610056 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.611490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.611525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.611537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.612264 4832 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.612286 4832 cpu_manager.go:226] "Reconciling" 
reconcilePeriod="10s" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.612305 4832 state_mem.go:36] "Initialized new in-memory state store" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.665696 4832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.665720 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.667018 4832 policy_none.go:49] "None policy: Start" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.668203 4832 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.668235 4832 state_mem.go:35] "Initializing new in-memory state store" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.668261 4832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.668312 4832 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.668350 4832 kubelet.go:2335] "Starting kubelet main sync loop" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.668536 4832 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 25 07:56:57 crc kubenswrapper[4832]: W0125 07:56:57.670197 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.670278 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.740532 4832 manager.go:334] "Starting Device Plugin manager" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.740610 4832 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.740623 4832 server.go:79] "Starting device plugin registration server" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.741174 4832 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.741538 4832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.741929 4832 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.742026 4832 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.742039 4832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.750754 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.767463 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="400ms" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.769559 4832 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.769705 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.771563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.771635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.771649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.771800 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.772522 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.772555 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773192 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773349 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.773379 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.774267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.774310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.774320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.774514 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.774926 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775030 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775609 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775712 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.775822 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.776908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.776959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.776978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.776997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.777028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.777028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.777066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.777040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.777084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.777337 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.777372 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.779377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.779444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.779463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.842168 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.847251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.847316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.847330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.847370 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 07:56:57 crc kubenswrapper[4832]: E0125 07:56:57.848081 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.889617 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.889707 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.889736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.889769 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.889810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.889946 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890192 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890284 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890329 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890354 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.890594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992564 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") 
" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992721 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992742 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992775 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992846 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992801 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992897 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992909 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992954 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992919 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993207 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993273 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.992978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993353 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993400 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:57 crc kubenswrapper[4832]: I0125 07:56:57.993512 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.048246 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.050170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.050229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.050243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.050278 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 07:56:58 crc kubenswrapper[4832]: E0125 07:56:58.051052 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" node="crc" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.098775 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.105196 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.121795 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.132275 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-40bba9b6a1d96916394b9e0b1559c7d2e1efcf3572343c66959279e7886c2cf4 WatchSource:0}: Error finding container 40bba9b6a1d96916394b9e0b1559c7d2e1efcf3572343c66959279e7886c2cf4: Status 404 returned error can't find the container with id 40bba9b6a1d96916394b9e0b1559c7d2e1efcf3572343c66959279e7886c2cf4 Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.134940 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-18a4d825ddfaad38d127941613f609b55cacdafe018b8cf07ba16ca5910d7569 WatchSource:0}: Error finding container 18a4d825ddfaad38d127941613f609b55cacdafe018b8cf07ba16ca5910d7569: Status 404 returned error can't find the container with id 18a4d825ddfaad38d127941613f609b55cacdafe018b8cf07ba16ca5910d7569 Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.136890 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.137329 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3020852acdfda4869bd12da5dcdaa92d7553168feb34e27c594d676f3376541f WatchSource:0}: Error finding container 3020852acdfda4869bd12da5dcdaa92d7553168feb34e27c594d676f3376541f: Status 404 returned error can't find the container with id 3020852acdfda4869bd12da5dcdaa92d7553168feb34e27c594d676f3376541f Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.147987 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3ec7a13aa49ebc2f82f22cd3c98719cdf0634a05d08f5049c7724218bdb12df4 WatchSource:0}: Error finding container 3ec7a13aa49ebc2f82f22cd3c98719cdf0634a05d08f5049c7724218bdb12df4: Status 404 returned error can't find the container with id 3ec7a13aa49ebc2f82f22cd3c98719cdf0634a05d08f5049c7724218bdb12df4 Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.151795 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 25 07:56:58 crc kubenswrapper[4832]: E0125 07:56:58.168367 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="800ms" Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.178308 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b88873324d0016dd1ff3e5c283de67cbdc0e9c9145acf077ef696be405c2286b WatchSource:0}: Error finding container b88873324d0016dd1ff3e5c283de67cbdc0e9c9145acf077ef696be405c2286b: Status 404 returned error can't find the container with id b88873324d0016dd1ff3e5c283de67cbdc0e9c9145acf077ef696be405c2286b Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.451907 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.453170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.453211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.453222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.453272 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 07:56:58 crc kubenswrapper[4832]: E0125 07:56:58.453788 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.213:6443: connect: connection refused" 
node="crc" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.557334 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.566428 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:45:54.23679214 +0000 UTC Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.678113 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534" exitCode=0 Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.678218 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.678315 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b88873324d0016dd1ff3e5c283de67cbdc0e9c9145acf077ef696be405c2286b"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.678473 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.679694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.679743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.679753 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.681018 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="950d9ef513ef0b8dfe71e41de54a35ffc366d8ec047e5d72819b0dd54a3bf003" exitCode=0 Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.681074 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"950d9ef513ef0b8dfe71e41de54a35ffc366d8ec047e5d72819b0dd54a3bf003"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.681092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3ec7a13aa49ebc2f82f22cd3c98719cdf0634a05d08f5049c7724218bdb12df4"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.681168 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.681972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.682023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.682036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.684620 4832 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85" exitCode=0 Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 
07:56:58.684619 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.684725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3020852acdfda4869bd12da5dcdaa92d7553168feb34e27c594d676f3376541f"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.684905 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.685962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.686038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.686049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.686695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.686728 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"40bba9b6a1d96916394b9e0b1559c7d2e1efcf3572343c66959279e7886c2cf4"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 
07:56:58.689811 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd" exitCode=0 Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.689870 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.689912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"18a4d825ddfaad38d127941613f609b55cacdafe018b8cf07ba16ca5910d7569"} Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.690041 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.690997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.691019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.691028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.694958 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.696701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.696747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 25 07:56:58 crc kubenswrapper[4832]: I0125 07:56:58.696767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.727025 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:58 crc kubenswrapper[4832]: E0125 07:56:58.727126 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.870850 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:58 crc kubenswrapper[4832]: E0125 07:56:58.870932 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:58 crc kubenswrapper[4832]: W0125 07:56:58.967264 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection 
refused Jan 25 07:56:58 crc kubenswrapper[4832]: E0125 07:56:58.967356 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:58 crc kubenswrapper[4832]: E0125 07:56:58.968842 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="1.6s" Jan 25 07:56:59 crc kubenswrapper[4832]: W0125 07:56:59.099203 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.213:6443: connect: connection refused Jan 25 07:56:59 crc kubenswrapper[4832]: E0125 07:56:59.099315 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.213:6443: connect: connection refused" logger="UnhandledError" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.254447 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.257834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.257886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:59 crc 
kubenswrapper[4832]: I0125 07:56:59.257903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.257937 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.566824 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:30:12.969408049 +0000 UTC Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.675463 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.699313 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826" exitCode=0 Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.699436 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.699608 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.700998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.701041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.701056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.702783 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"16cd5f32fafee871295127ddc44b9575056c8d5c29478dd3fb19da6bda07f5fc"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.702979 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.704082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.704116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.704128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.706631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.706659 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.706671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.706751 4832 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.707584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.707615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.707626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.714349 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.714407 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.714414 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.714561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.715622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 
07:56:59.715666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.715684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.725805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.725970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.726081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25"} Jan 25 07:56:59 crc kubenswrapper[4832]: I0125 07:56:59.726196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910"} Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.567679 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:04:30.06030433 +0000 UTC Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.733028 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a"} Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.733078 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.734472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.734528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.734552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.736294 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721" exitCode=0 Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.736420 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721"} Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.736445 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.736604 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.740161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.740895 4832 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.740925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.740982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.740946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:00 crc kubenswrapper[4832]: I0125 07:57:00.741095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.568327 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:17:13.66693155 +0000 UTC Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.744142 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.744126 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087"} Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.744198 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.744219 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13"} Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.744257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f"} Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.744281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc"} Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.745008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.745040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:01 crc kubenswrapper[4832]: I0125 07:57:01.745049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.480656 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.569447 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:08:50.652730884 +0000 UTC Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.752447 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956"} Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.752513 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.752579 4832 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.752611 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.754086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.754125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.754137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.754223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.754261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:02 crc kubenswrapper[4832]: I0125 07:57:02.754279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.570555 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:38:30.029049943 +0000 UTC Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.752941 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.755109 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.755168 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:03 crc 
kubenswrapper[4832]: I0125 07:57:03.755294 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.756242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.756280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.756292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.756818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.756862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:03 crc kubenswrapper[4832]: I0125 07:57:03.756872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:04 crc kubenswrapper[4832]: I0125 07:57:04.570708 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:42:36.12451655 +0000 UTC Jan 25 07:57:04 crc kubenswrapper[4832]: I0125 07:57:04.905676 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:57:04 crc kubenswrapper[4832]: I0125 07:57:04.905969 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:04 crc kubenswrapper[4832]: I0125 07:57:04.907932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:04 crc kubenswrapper[4832]: I0125 
07:57:04.907995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:04 crc kubenswrapper[4832]: I0125 07:57:04.908009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.364096 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.364457 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.367818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.367956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.368107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.399221 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.399504 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.401034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.401096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.401119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.526289 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.526590 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.528082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.528130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.528140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.571579 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:07:15.443278762 +0000 UTC Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.770602 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.770773 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.772710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.772757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.772768 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 25 07:57:05 crc kubenswrapper[4832]: I0125 07:57:05.778026 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.215113 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.215370 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.216714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.216835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.216912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.571782 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:49:04.127151465 +0000 UTC Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.762663 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.762805 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.763520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.763618 4832 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:06 crc kubenswrapper[4832]: I0125 07:57:06.763704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:07 crc kubenswrapper[4832]: I0125 07:57:07.572681 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:10:40.58029658 +0000 UTC Jan 25 07:57:07 crc kubenswrapper[4832]: E0125 07:57:07.750847 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 25 07:57:07 crc kubenswrapper[4832]: I0125 07:57:07.764873 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:07 crc kubenswrapper[4832]: I0125 07:57:07.765738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:07 crc kubenswrapper[4832]: I0125 07:57:07.765776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:07 crc kubenswrapper[4832]: I0125 07:57:07.765788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.262695 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.400066 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.400158 4832 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.573767 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:21:39.997998902 +0000 UTC Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.768624 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.770751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.770793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.770806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:08 crc kubenswrapper[4832]: I0125 07:57:08.773367 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:09 crc kubenswrapper[4832]: E0125 07:57:09.259631 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 25 07:57:09 crc kubenswrapper[4832]: I0125 07:57:09.558040 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake 
timeout Jan 25 07:57:09 crc kubenswrapper[4832]: I0125 07:57:09.574484 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:44:32.210001918 +0000 UTC Jan 25 07:57:09 crc kubenswrapper[4832]: E0125 07:57:09.677455 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 25 07:57:09 crc kubenswrapper[4832]: I0125 07:57:09.771179 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:09 crc kubenswrapper[4832]: I0125 07:57:09.772001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:09 crc kubenswrapper[4832]: I0125 07:57:09.772056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:09 crc kubenswrapper[4832]: I0125 07:57:09.772072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.568621 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.568701 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.574899 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:57:21.026785635 +0000 UTC Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.578418 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.578479 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.860451 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.862099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.862144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.862156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:10 crc kubenswrapper[4832]: I0125 07:57:10.862186 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 07:57:11 
crc kubenswrapper[4832]: I0125 07:57:11.575268 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:18:32.677862742 +0000 UTC Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.488883 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.489107 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.491678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.491729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.491746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.496014 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.576063 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:41:52.071093667 +0000 UTC Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.778756 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.780079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.780147 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:12 crc kubenswrapper[4832]: I0125 07:57:12.780168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:13 crc kubenswrapper[4832]: I0125 07:57:13.576811 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:56:36.890703589 +0000 UTC Jan 25 07:57:13 crc kubenswrapper[4832]: I0125 07:57:13.986326 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 07:57:14 crc kubenswrapper[4832]: I0125 07:57:14.009037 4832 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 25 07:57:14 crc kubenswrapper[4832]: I0125 07:57:14.577635 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:34:53.784631099 +0000 UTC Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.561543 4832 trace.go:236] Trace[300709425]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 07:57:02.480) (total time: 13081ms): Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[300709425]: ---"Objects listed" error: 13081ms (07:57:15.561) Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[300709425]: [13.081248323s] [13.081248323s] END Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.561933 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 25 07:57:15 crc kubenswrapper[4832]: E0125 07:57:15.561533 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 
07:57:15.561572 4832 trace.go:236] Trace[1057053414]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 07:57:01.462) (total time: 14099ms): Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[1057053414]: ---"Objects listed" error: 14099ms (07:57:15.561) Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[1057053414]: [14.099440548s] [14.099440548s] END Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.562649 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.563659 4832 trace.go:236] Trace[695731509]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 07:57:01.447) (total time: 14116ms): Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[695731509]: ---"Objects listed" error: 14116ms (07:57:15.563) Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[695731509]: [14.116359669s] [14.116359669s] END Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.563694 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.565212 4832 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.568480 4832 trace.go:236] Trace[534052180]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 07:57:01.064) (total time: 14503ms): Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[534052180]: ---"Objects listed" error: 14503ms (07:57:15.568) Jan 25 07:57:15 crc kubenswrapper[4832]: Trace[534052180]: [14.503917142s] [14.503917142s] END Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.568524 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.578449 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:50:06.784721045 +0000 UTC Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.598442 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.606464 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.610630 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46338->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.610914 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46338->192.168.126.11:17697: read: connection reset by peer" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.613009 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.613298 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.613603 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.613644 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.789422 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.792312 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a" exitCode=255 Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.792419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a"} Jan 25 07:57:15 crc kubenswrapper[4832]: E0125 07:57:15.798032 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.810345 4832 scope.go:117] 
"RemoveContainer" containerID="7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.894741 4832 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.895056 4832 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.897093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.897141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.897158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.897177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.897189 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:15Z","lastTransitionTime":"2026-01-25T07:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:15 crc kubenswrapper[4832]: E0125 07:57:15.911110 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.915209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.915234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.915246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.915260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.915270 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:15Z","lastTransitionTime":"2026-01-25T07:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:15 crc kubenswrapper[4832]: E0125 07:57:15.926162 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.932172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.932205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.932216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.932230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.932241 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:15Z","lastTransitionTime":"2026-01-25T07:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:15 crc kubenswrapper[4832]: E0125 07:57:15.943754 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.948009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.948044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.948052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.948067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.948078 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:15Z","lastTransitionTime":"2026-01-25T07:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.959461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.959492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.959500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.959513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.959523 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:15Z","lastTransitionTime":"2026-01-25T07:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:15 crc kubenswrapper[4832]: E0125 07:57:15.967836 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:15 crc kubenswrapper[4832]: E0125 07:57:15.968000 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.969730 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.969779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.969791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.969811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:15 crc kubenswrapper[4832]: I0125 07:57:15.969826 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:15Z","lastTransitionTime":"2026-01-25T07:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.072066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.072100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.072109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.072124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.072140 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.174048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.174088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.174100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.174118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.174131 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.226409 4832 csr.go:261] certificate signing request csr-6tznt is approved, waiting to be issued Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.245819 4832 csr.go:257] certificate signing request csr-6tznt is issued Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.261219 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.274802 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.276205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.276242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.276256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.276270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.276281 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.378783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.378821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.378832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.378847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.378861 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.481186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.481239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.481248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.481261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.481270 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.558440 4832 apiserver.go:52] "Watching apiserver" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.561785 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.562189 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-ljmz9","openshift-image-registry/node-ca-6dqw2","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/machine-config-daemon-9r9sz","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.562477 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.562583 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.562614 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.562700 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.562876 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.562914 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.562876 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.563209 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.563287 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.563435 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.563531 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.563733 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.565034 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.565137 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.565272 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.565443 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.566166 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.566310 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.566466 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.566478 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.566398 4832 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.567211 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.567248 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.567614 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.567999 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568033 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568195 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568218 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568264 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568354 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568358 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568411 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568455 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.568354 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571815 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571846 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571864 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571887 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571903 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571921 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571966 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.571982 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572002 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572018 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572035 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572053 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572071 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572087 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572101 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572118 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.572713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.573067 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.573132 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.573325 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.573378 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.573412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.573438 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.573437 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574241 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574316 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574425 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574459 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574535 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574565 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574645 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574678 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574713 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574770 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574836 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574871 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574915 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574947 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574991 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575030 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575067 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575103 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575137 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575179 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575257 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575299 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575496 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575534 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 
crc kubenswrapper[4832]: I0125 07:57:16.573593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575569 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575571 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574023 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574154 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574169 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575610 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575657 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575688 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 07:57:16 crc 
kubenswrapper[4832]: I0125 07:57:16.575725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575764 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575899 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575945 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575984 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576018 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576089 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576117 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576193 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576262 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576333 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576363 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") 
pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576428 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576472 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576512 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576558 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576628 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576667 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576705 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576747 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576781 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576822 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 
07:57:16.576860 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576894 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576966 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576995 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577030 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577109 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577147 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577258 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577292 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577323 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577358 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577944 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578003 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578039 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 
07:57:16.578259 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578333 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578418 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578452 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578514 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578548 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578734 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578767 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578797 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578827 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578858 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578892 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578961 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579051 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579079 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579109 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579140 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579175 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579208 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579236 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 07:57:16 crc 
kubenswrapper[4832]: I0125 07:57:16.579268 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579300 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579329 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579401 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579439 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579465 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579500 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580935 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 07:57:16 crc 
kubenswrapper[4832]: I0125 07:57:16.580960 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580985 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581009 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581035 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581061 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581086 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581109 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581135 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581175 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581197 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581217 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581259 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581281 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581302 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581323 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581366 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581401 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581427 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 
07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581484 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581507 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581530 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581551 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581575 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581638 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581669 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581694 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581719 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 07:57:16 
crc kubenswrapper[4832]: I0125 07:57:16.574283 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574573 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574572 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574726 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.574752 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575270 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575509 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575533 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575795 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575812 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.575997 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576173 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576204 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576402 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576452 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576568 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576655 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.576818 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577016 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577336 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577878 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.577889 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.578868 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579644 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579701 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.579818 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580017 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580129 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580276 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580301 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580599 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.580998 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:18:44.855031327 +0000 UTC Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581330 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581860 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581857 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.581896 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.582091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.582282 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.582436 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.582492 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.582631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.582650 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.582977 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.583018 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.583105 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.583249 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.583333 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.583463 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.583842 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.584623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.584761 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.584953 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.585866 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.585173 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.585397 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.585436 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.585602 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.585697 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.583814 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586026 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586069 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586076 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586118 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586170 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586200 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586411 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586444 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586472 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586495 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586581 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586634 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586650 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586777 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586828 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586838 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586936 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.586995 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587046 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587078 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 07:57:16 crc 
kubenswrapper[4832]: I0125 07:57:16.587117 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587195 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587227 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587260 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587362 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1fb47e8e-c812-41b4-9be7-3fad81e121b0-rootfs\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587439 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587480 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-serviceca\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587545 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.587633 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:17.087600361 +0000 UTC m=+19.761423894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587655 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.587853 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.588323 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.592589 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.592771 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.592922 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.592999 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.593556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.594027 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.594610 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.595412 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.595489 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.595724 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.595982 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.594672 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.596041 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.596240 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.596884 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.597173 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.597507 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.597796 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.598052 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.598231 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.598253 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.598343 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.598447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.598570 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.598716 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.599056 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.599142 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.599828 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.599948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.599963 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.600124 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.600499 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.600623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.600945 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.601301 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.601853 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.603839 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.604172 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.604444 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.604934 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.605102 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.605271 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.605561 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.605726 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.605842 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.605985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.606367 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.605053 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.606893 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.612826 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.613180 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.613196 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.613262 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.613411 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.613632 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:17.113610132 +0000 UTC m=+19.787433665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.613742 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.613858 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.613894 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.613964 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614010 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614092 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fb47e8e-c812-41b4-9be7-3fad81e121b0-proxy-tls\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614211 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t6v2\" (UniqueName: \"kubernetes.io/projected/1fb47e8e-c812-41b4-9be7-3fad81e121b0-kube-api-access-2t6v2\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614275 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614349 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dzs\" (UniqueName: \"kubernetes.io/projected/f0e6de28-95c1-4b62-93a5-8141ed12ba8e-kube-api-access-s6dzs\") pod \"node-resolver-ljmz9\" (UID: \"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\") " pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614526 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.614538 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614837 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.614943 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.615068 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:17.115051836 +0000 UTC m=+19.788875369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615169 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615235 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-host\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615279 4832 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fb47e8e-c812-41b4-9be7-3fad81e121b0-mcd-auth-proxy-config\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.615426 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f0e6de28-95c1-4b62-93a5-8141ed12ba8e-hosts-file\") pod \"node-resolver-ljmz9\" (UID: \"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\") " pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.615495 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-25 07:57:17.115484069 +0000 UTC m=+19.789307602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615797 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.615903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxmsw\" (UniqueName: 
\"kubernetes.io/projected/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-kube-api-access-gxmsw\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.616057 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.616361 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.616727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.616740 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.616961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617206 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617222 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617266 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617405 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617885 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617934 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.617937 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.618019 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.618502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.618611 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.618754 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.618987 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.620417 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.620646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621443 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621470 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621486 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621501 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621513 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621525 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621539 4832 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc 
kubenswrapper[4832]: I0125 07:57:16.621552 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621563 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621577 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621626 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621638 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621650 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621662 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621676 4832 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621688 4832 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621699 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621712 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621724 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621734 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621745 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621760 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621773 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621788 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621801 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621815 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621869 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621885 4832 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621901 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" 
DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621917 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621935 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621950 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621965 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621979 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.621993 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622010 4832 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622028 4832 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622044 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622059 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622079 4832 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622095 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622113 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622133 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622149 4832 reconciler_common.go:293] "Volume 
detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622164 4832 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622183 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622197 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622213 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622229 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622242 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622255 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622266 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622280 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622295 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622307 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622318 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622329 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622340 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: 
I0125 07:57:16.622354 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622366 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622377 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622421 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622433 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622444 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622456 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622468 4832 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622480 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622491 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622502 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622513 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622525 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622538 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622550 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622561 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622573 4832 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622586 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622597 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622610 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622620 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622631 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath 
\"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622642 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622653 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622663 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622674 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622685 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622696 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622709 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622719 4832 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622731 4832 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622744 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622754 4832 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622767 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622778 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622793 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622803 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622814 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622827 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622838 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622851 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622862 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622872 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622883 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc 
kubenswrapper[4832]: I0125 07:57:16.622895 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622905 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622918 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622929 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622942 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622953 4832 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622963 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622976 4832 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622988 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622992 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.622998 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623125 4832 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623140 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623151 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623267 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623309 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623349 4832 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623395 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623655 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623684 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.623986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624471 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624540 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624556 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624455 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" 
(OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624570 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.612851 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624598 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624609 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624620 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624631 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624641 
4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624652 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624662 4832 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624672 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624683 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624696 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624717 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624736 4832 
reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624750 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624761 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624775 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624785 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624795 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624804 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624814 4832 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624831 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624849 4832 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624504 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624861 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624878 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624893 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624912 4832 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624926 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.624987 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.625097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.625539 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.625657 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.625939 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.626013 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.626097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.626198 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.626227 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.626797 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.626896 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.627766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.627832 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.632345 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.632798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.635924 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.636109 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.642875 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.642924 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.642944 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.643037 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:17.143010336 +0000 UTC m=+19.816834059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.645638 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.645698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.646267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.648173 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: 
"6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.649982 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.653172 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.658063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.664776 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.666445 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.670266 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.675923 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.701286 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725660 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxmsw\" (UniqueName: \"kubernetes.io/projected/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-kube-api-access-gxmsw\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725682 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1fb47e8e-c812-41b4-9be7-3fad81e121b0-rootfs\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725701 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-serviceca\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725702 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fb47e8e-c812-41b4-9be7-3fad81e121b0-proxy-tls\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t6v2\" (UniqueName: \"kubernetes.io/projected/1fb47e8e-c812-41b4-9be7-3fad81e121b0-kube-api-access-2t6v2\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725778 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dzs\" (UniqueName: \"kubernetes.io/projected/f0e6de28-95c1-4b62-93a5-8141ed12ba8e-kube-api-access-s6dzs\") pod \"node-resolver-ljmz9\" (UID: \"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\") " pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1fb47e8e-c812-41b4-9be7-3fad81e121b0-rootfs\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-host\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f0e6de28-95c1-4b62-93a5-8141ed12ba8e-hosts-file\") pod \"node-resolver-ljmz9\" (UID: \"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\") " pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fb47e8e-c812-41b4-9be7-3fad81e121b0-mcd-auth-proxy-config\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725937 4832 
reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725947 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725958 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725968 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725977 4832 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725987 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.725996 4832 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726005 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726013 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726021 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726030 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726040 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726052 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726064 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726081 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" 
Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726102 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726111 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726119 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726128 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726138 4832 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726147 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726155 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726165 4832 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726173 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726184 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726212 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726236 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726250 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726264 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726278 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726292 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726305 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726318 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726407 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-serviceca\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: 
I0125 07:57:16.726820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f0e6de28-95c1-4b62-93a5-8141ed12ba8e-hosts-file\") pod \"node-resolver-ljmz9\" (UID: \"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\") " pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.726853 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-host\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727303 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727313 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fb47e8e-c812-41b4-9be7-3fad81e121b0-mcd-auth-proxy-config\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727321 4832 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727370 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727406 4832 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727415 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727424 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727433 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727442 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727451 4832 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727464 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727474 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.727482 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.737653 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.738105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fb47e8e-c812-41b4-9be7-3fad81e121b0-proxy-tls\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.753044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxmsw\" (UniqueName: \"kubernetes.io/projected/5b30a48c-b823-4cdd-ac0c-def5487d8fa6-kube-api-access-gxmsw\") pod \"node-ca-6dqw2\" (UID: \"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\") " pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.753989 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.755863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t6v2\" (UniqueName: \"kubernetes.io/projected/1fb47e8e-c812-41b4-9be7-3fad81e121b0-kube-api-access-2t6v2\") pod \"machine-config-daemon-9r9sz\" (UID: \"1fb47e8e-c812-41b4-9be7-3fad81e121b0\") " pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.758085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dzs\" (UniqueName: 
\"kubernetes.io/projected/f0e6de28-95c1-4b62-93a5-8141ed12ba8e-kube-api-access-s6dzs\") pod \"node-resolver-ljmz9\" (UID: \"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\") " pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.793703 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a129
66a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.795952 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.798681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.798734 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:57:16 crc kubenswrapper[4832]: E0125 07:57:16.808259 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.815937 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.828350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.828376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.828402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.828416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.828424 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.828410 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.838159 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.859615 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.870460 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.877680 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.886696 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.889522 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: W0125 07:57:16.897878 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d5a933eca633735fcd342f3b149bf69560e365b4873b3e706f5f35cebdbbe0bb WatchSource:0}: Error finding container d5a933eca633735fcd342f3b149bf69560e365b4873b3e706f5f35cebdbbe0bb: Status 404 returned error can't find the container with id d5a933eca633735fcd342f3b149bf69560e365b4873b3e706f5f35cebdbbe0bb Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.901204 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.907231 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster
-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.933855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.933909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.933922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.933943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.933961 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:16Z","lastTransitionTime":"2026-01-25T07:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.934376 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.937633 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.952800 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.959706 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ljmz9" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.967653 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.970098 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.974234 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6dqw2" Jan 25 07:57:16 crc kubenswrapper[4832]: I0125 07:57:16.991105 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.011841 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb47e8e_c812_41b4_9be7_3fad81e121b0.slice/crio-41215c7bd881a9355ae74080b37c4132e90339243fd33009b35933e6617442ff WatchSource:0}: Error finding container 41215c7bd881a9355ae74080b37c4132e90339243fd33009b35933e6617442ff: Status 404 returned error can't find the container with id 41215c7bd881a9355ae74080b37c4132e90339243fd33009b35933e6617442ff Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.017820 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.036353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.036672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.036685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.036702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.036713 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.046707 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.066903 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.087706 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.099678 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.115760 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.129715 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.129826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.129859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.129886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130011 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130032 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130044 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130097 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:18.130081119 +0000 UTC m=+20.803904652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130489 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:18.130476841 +0000 UTC m=+20.804300384 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130539 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130567 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:18.130557583 +0000 UTC m=+20.804381126 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130617 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.130644 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:18.130636566 +0000 UTC m=+20.804460109 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.138452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.138482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.138494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.138510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.138521 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.152427 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kzrcf"] Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.152828 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.154511 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-plv66"] Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.155187 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7tflx"] Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.155717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.156052 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.156263 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.156522 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.156587 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.156969 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.157065 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.157176 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.157265 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.157274 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.174727 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.174931 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.174931 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.175156 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.175256 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.175510 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.184169 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646
fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.199926 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.216209 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.229118 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232169 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-cnibin\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6tmq\" (UniqueName: 
\"kubernetes.io/projected/947f1c61-f061-4448-b301-9c2554b67933-kube-api-access-g6tmq\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-cnibin\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-netd\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232268 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-hostroot\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232282 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-script-lib\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-cni-multus\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-node-log\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-conf-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-ovn\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232349 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovn-node-metrics-cert\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-socket-dir-parent\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232378 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5439ad80-35f6-4da4-8745-8104e9963472-multus-daemon-config\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-var-lib-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232497 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-ovn-kubernetes\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232515 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232530 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-etc-kubernetes\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-etc-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-os-release\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232580 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-kubelet\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232595 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232612 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/947f1c61-f061-4448-b301-9c2554b67933-cni-binary-copy\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/947f1c61-f061-4448-b301-9c2554b67933-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232643 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-cni-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") 
" pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232659 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-os-release\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232674 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-cni-bin\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232690 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-system-cni-dir\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232706 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg29p\" (UniqueName: \"kubernetes.io/projected/5439ad80-35f6-4da4-8745-8104e9963472-kube-api-access-dg29p\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232722 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm2k\" (UniqueName: \"kubernetes.io/projected/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-kube-api-access-rkm2k\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-system-cni-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232752 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5439ad80-35f6-4da4-8745-8104e9963472-cni-binary-copy\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232767 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-k8s-cni-cncf-io\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-netns\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232801 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-kubelet\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-env-overrides\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232833 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-systemd-units\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232845 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-bin\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232859 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-config\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-slash\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232886 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-systemd\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232899 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-log-socket\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232914 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-netns\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.232927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-multus-certs\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.233102 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.233118 4832 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.233128 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.233161 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:18.233149779 +0000 UTC m=+20.906973312 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.239585 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.241241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.241276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.241288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.241305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.241316 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.247376 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-25 07:52:16 +0000 UTC, rotation deadline is 2026-11-23 07:05:32.967359666 +0000 UTC Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.247503 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7247h8m15.71985984s for next certificate rotation Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.249742 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.256821 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.264768 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.281090 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.297200 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.307799 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.317267 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.324176 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.332477 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-ovn-kubernetes\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333887 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-etc-kubernetes\") pod \"multus-kzrcf\" (UID: 
\"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333943 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-etc-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333954 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-ovn-kubernetes\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-etc-kubernetes\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334024 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-etc-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334008 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-os-release\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 
07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.333968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-os-release\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-kubelet\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334264 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-kubelet\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc 
kubenswrapper[4832]: I0125 07:57:17.334271 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/947f1c61-f061-4448-b301-9c2554b67933-cni-binary-copy\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334293 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/947f1c61-f061-4448-b301-9c2554b67933-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334325 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-cni-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-os-release\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc 
kubenswrapper[4832]: I0125 07:57:17.334365 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-cni-bin\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-system-cni-dir\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-os-release\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg29p\" (UniqueName: \"kubernetes.io/projected/5439ad80-35f6-4da4-8745-8104e9963472-kube-api-access-dg29p\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334488 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm2k\" (UniqueName: \"kubernetes.io/projected/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-kube-api-access-rkm2k\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334493 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-cni-bin\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334517 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-system-cni-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334523 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-system-cni-dir\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334538 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5439ad80-35f6-4da4-8745-8104e9963472-cni-binary-copy\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334562 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-k8s-cni-cncf-io\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-cni-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-netns\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334644 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-netns\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-kubelet\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-env-overrides\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-system-cni-dir\") pod \"multus-kzrcf\" (UID: 
\"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-systemd-units\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334749 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-kubelet\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334763 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-bin\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-config\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-slash\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc 
kubenswrapper[4832]: I0125 07:57:17.334840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-systemd\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-log-socket\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-netns\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334925 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-multus-certs\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/947f1c61-f061-4448-b301-9c2554b67933-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334955 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-cnibin\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-slash\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335019 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-log-socket\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.334984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6tmq\" (UniqueName: \"kubernetes.io/projected/947f1c61-f061-4448-b301-9c2554b67933-kube-api-access-g6tmq\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335049 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-systemd\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335074 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-multus-certs\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-cnibin\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335108 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-netd\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335114 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/947f1c61-f061-4448-b301-9c2554b67933-cni-binary-copy\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-hostroot\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-hostroot\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") 
" pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-systemd-units\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-script-lib\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335176 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-netns\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335192 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-cni-multus\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-netd\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335198 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-bin\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335213 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-node-log\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/947f1c61-f061-4448-b301-9c2554b67933-cnibin\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-conf-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335256 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-var-lib-cni-multus\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-ovn\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-env-overrides\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335279 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovn-node-metrics-cert\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335315 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-socket-dir-parent\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5439ad80-35f6-4da4-8745-8104e9963472-multus-daemon-config\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335351 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-var-lib-openvswitch\") 
pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335410 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-var-lib-openvswitch\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-socket-dir-parent\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335511 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-host-run-k8s-cni-cncf-io\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335563 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-multus-conf-dir\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335598 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5439ad80-35f6-4da4-8745-8104e9963472-cni-binary-copy\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 
crc kubenswrapper[4832]: I0125 07:57:17.335602 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-node-log\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-ovn\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335124 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5439ad80-35f6-4da4-8745-8104e9963472-cnibin\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335851 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-script-lib\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5439ad80-35f6-4da4-8745-8104e9963472-multus-daemon-config\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.335910 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-config\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.344534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.344596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.344609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.344633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.344647 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.345483 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.357562 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.371845 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.380146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.390009 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.401890 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.427937 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.447087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.447124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.447135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.447150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.447162 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.448062 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.472072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovn-node-metrics-cert\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.472261 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg29p\" (UniqueName: \"kubernetes.io/projected/5439ad80-35f6-4da4-8745-8104e9963472-kube-api-access-dg29p\") pod \"multus-kzrcf\" (UID: \"5439ad80-35f6-4da4-8745-8104e9963472\") " pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.472353 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm2k\" (UniqueName: \"kubernetes.io/projected/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-kube-api-access-rkm2k\") pod \"ovnkube-node-plv66\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.473379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6tmq\" (UniqueName: 
\"kubernetes.io/projected/947f1c61-f061-4448-b301-9c2554b67933-kube-api-access-g6tmq\") pod \"multus-additional-cni-plugins-7tflx\" (UID: \"947f1c61-f061-4448-b301-9c2554b67933\") " pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.474072 4832 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474543 4832 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474593 4832 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474632 4832 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474645 4832 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474666 4832 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474729 4832 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474751 4832 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474752 4832 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474680 4832 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474697 4832 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474708 4832 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474771 4832 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474804 4832 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474823 4832 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474840 4832 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474864 4832 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474873 4832 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch 
of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474883 4832 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474908 4832 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474908 4832 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474940 4832 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474959 4832 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474907 4832 reflector.go:484] 
object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474967 4832 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474993 4832 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474995 4832 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474994 4832 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.475021 4832 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.475000 4832 
reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.475027 4832 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.474684 4832 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.475042 4832 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.475016 4832 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.475058 4832 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.475053 4832 
reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.477883 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kzrcf" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.488776 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.493136 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5439ad80_35f6_4da4_8745_8104e9963472.slice/crio-8d2d0ed58c3f02961f98c05d9431e8947be14228f2c8b59501c0b98c0d2cf46a WatchSource:0}: Error finding container 8d2d0ed58c3f02961f98c05d9431e8947be14228f2c8b59501c0b98c0d2cf46a: Status 404 returned error can't find the container with id 
8d2d0ed58c3f02961f98c05d9431e8947be14228f2c8b59501c0b98c0d2cf46a Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.498494 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7tflx" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.511171 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.530022 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.533263 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947f1c61_f061_4448_b301_9c2554b67933.slice/crio-feafa4c61a9dcaf0ef9839d194662773be9bc372806871556b4d335544f5211f WatchSource:0}: Error finding container feafa4c61a9dcaf0ef9839d194662773be9bc372806871556b4d335544f5211f: Status 404 returned error can't find the container with id feafa4c61a9dcaf0ef9839d194662773be9bc372806871556b4d335544f5211f Jan 25 07:57:17 crc kubenswrapper[4832]: W0125 07:57:17.545275 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6fdc72_86dc_433d_8aac_57b0eeefaca3.slice/crio-d73c9049e88f0abcfe403e59157661b88c6def931705eca09ebe7047427a19f5 WatchSource:0}: Error finding container 
d73c9049e88f0abcfe403e59157661b88c6def931705eca09ebe7047427a19f5: Status 404 returned error can't find the container with id d73c9049e88f0abcfe403e59157661b88c6def931705eca09ebe7047427a19f5 Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.554662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.554704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.554715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.554732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.554743 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.571705 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.590505 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:38:06.208499582 +0000 UTC Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.633792 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.653696 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.661343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.661894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.661914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.661931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.661944 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.669986 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.670259 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.683642 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.685435 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.686989 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.688042 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.688903 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.690274 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.691190 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.691512 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.692160 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.693544 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.694212 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.695587 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 25 
07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.696457 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.697738 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.698412 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.699992 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.700680 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.701476 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.703971 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.704769 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 25 
07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.705526 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.706729 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.707488 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.708530 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.709318 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.709843 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.711193 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.712488 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 25 
07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.713061 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.713858 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.714956 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.715891 4832 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.716014 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.718501 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.719426 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.719976 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" 
path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.722481 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.723312 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.724000 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.725215 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.726154 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.727412 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.728151 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.729419 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.729918 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.730768 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.731484 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.732350 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.733570 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 
07:57:17.734461 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.735454 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.736047 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.736555 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.737501 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.738054 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.739336 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.764933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.764999 4832 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.765011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.765036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.765051 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.772176 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: E0125 07:57:17.790522 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6fdc72_86dc_433d_8aac_57b0eeefaca3.slice/crio-conmon-ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1.scope\": RecentStats: unable to find data in memory cache]" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.802205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434"} Jan 25 07:57:17 crc kubenswrapper[4832]: 
I0125 07:57:17.802284 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d5a933eca633735fcd342f3b149bf69560e365b4873b3e706f5f35cebdbbe0bb"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.804602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6dqw2" event={"ID":"5b30a48c-b823-4cdd-ac0c-def5487d8fa6","Type":"ContainerStarted","Data":"5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.804641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6dqw2" event={"ID":"5b30a48c-b823-4cdd-ac0c-def5487d8fa6","Type":"ContainerStarted","Data":"c4ae5dfa3160e01731a4629f59d5a846d732419559f1cd099eb7bf4edb9f5453"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.808065 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.808122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.808141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8d0493fae790d48ce92879de0461131724f6f7ad9573fdda87c0a92617dc8398"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.809355 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerStarted","Data":"feafa4c61a9dcaf0ef9839d194662773be9bc372806871556b4d335544f5211f"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.811163 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1" exitCode=0 Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.811346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.811526 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"d73c9049e88f0abcfe403e59157661b88c6def931705eca09ebe7047427a19f5"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.813838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"da30309ae231b0408b29e86cc5b4fcea271a68b119be31ad78723b37bebd9206"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.816230 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.819311 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerStarted","Data":"c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.819356 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerStarted","Data":"8d2d0ed58c3f02961f98c05d9431e8947be14228f2c8b59501c0b98c0d2cf46a"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.822912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" 
event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.822959 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.822970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"41215c7bd881a9355ae74080b37c4132e90339243fd33009b35933e6617442ff"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.833271 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ljmz9" event={"ID":"f0e6de28-95c1-4b62-93a5-8141ed12ba8e","Type":"ContainerStarted","Data":"90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.833337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ljmz9" event={"ID":"f0e6de28-95c1-4b62-93a5-8141ed12ba8e","Type":"ContainerStarted","Data":"f86577eaf04a90fa87e12b0e3028ef99bae25f8067bcd8dba647e803d91ff7cb"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.852312 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.867448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.867505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.867518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.867535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.867550 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.893575 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.929588 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.970931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.971341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.971355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 
07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.971372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.971399 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:17Z","lastTransitionTime":"2026-01-25T07:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:17 crc kubenswrapper[4832]: I0125 07:57:17.984097 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.025532 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.054892 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.073742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.073789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.073801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.073818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.073830 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.090675 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.137167 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.145298 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.145377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.145426 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.145454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145484 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:20.14545609 +0000 UTC m=+22.819279623 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145556 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145613 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:20.145597994 +0000 UTC m=+22.819421527 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145607 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145618 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145699 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:20.145682357 +0000 UTC m=+22.819505890 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145799 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145868 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.145945 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:20.145924854 +0000 UTC m=+22.819748467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.176427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.176466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.176476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.176494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.176504 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.177570 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.222035 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.246723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.246857 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.246874 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.246885 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.246936 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:20.246923173 +0000 UTC m=+22.920746706 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.260292 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.279536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.279576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.279587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.279605 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.279616 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.292257 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.302437 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.341243 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.375025 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.381334 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.383216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.383245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.383259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.383275 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc 
kubenswrapper[4832]: I0125 07:57:18.383287 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.400620 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.423773 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.460836 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.480519 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.486550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.486605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.486621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.486644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.486658 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.508087 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.520826 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.542594 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.560288 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.581379 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.590591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc 
kubenswrapper[4832]: I0125 07:57:18.590653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.590669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.590696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.590728 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.590731 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:22:59.265400142 +0000 UTC Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.620913 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.640134 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.661619 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.669526 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.669647 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.669713 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:18 crc kubenswrapper[4832]: E0125 07:57:18.669761 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.688750 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.693000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc 
kubenswrapper[4832]: I0125 07:57:18.693070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.693089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.693117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.693132 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.700773 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.720297 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.740652 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.761037 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.795754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.795797 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.795807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.795825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.795839 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.800614 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.824549 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.840558 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.840806 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.840872 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.840934 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.840995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.841051 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.841968 4832 generic.go:334] "Generic (PLEG): container finished" podID="947f1c61-f061-4448-b301-9c2554b67933" containerID="446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812" exitCode=0 Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.841995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerDied","Data":"446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.851677 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:18Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.860299 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.880918 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.899158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.899206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.899216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.899234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:18 
crc kubenswrapper[4832]: I0125 07:57:18.899246 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:18Z","lastTransitionTime":"2026-01-25T07:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.900969 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.921245 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.961586 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 25 07:57:18 crc kubenswrapper[4832]: I0125 07:57:18.981337 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.000534 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.002094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.002147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.002164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.002190 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.002238 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.020957 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.041441 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.060993 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.081159 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.105791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.105843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.105867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.105887 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.105899 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.111375 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.121174 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.140967 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.160647 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.179997 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.207865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.207911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc 
kubenswrapper[4832]: I0125 07:57:19.207921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.207977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.207997 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.230561 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.267894 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.309844 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.310732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.310802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.310817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.310840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.310855 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.348928 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.404152 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.412447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.412481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.412491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.412505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.412516 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.431444 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.468730 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.511197 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.515241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.515272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.515284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.515297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.515307 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.551369 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:
57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.591536 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:54:33.599585452 +0000 UTC Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.594295 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"da
ta-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.617285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.617331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.617342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 
07:57:19.617360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.617372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.631155 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.669066 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:19 crc kubenswrapper[4832]: E0125 07:57:19.669281 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.670320 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.708369 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.720121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.720169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.720179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.720193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.720203 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.756263 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.790734 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.821931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.821974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.821983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.821998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.822007 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.835850 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.847993 4832 generic.go:334] "Generic (PLEG): container finished" podID="947f1c61-f061-4448-b301-9c2554b67933" containerID="a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539" exitCode=0 Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.848188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerDied","Data":"a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.849703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.871058 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.910375 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.926348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:19 crc 
kubenswrapper[4832]: I0125 07:57:19.926411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.926421 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.926438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.926449 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:19Z","lastTransitionTime":"2026-01-25T07:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.952694 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:19 crc kubenswrapper[4832]: I0125 07:57:19.994968 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:19Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.029355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.029434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.029449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.029466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.029478 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.038685 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.069656 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.108655 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.132228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.132327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.132345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.132403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.132421 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.148651 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.177001 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177114 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:24.177094789 +0000 UTC m=+26.850918322 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.177144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.177197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.177231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177320 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 
07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177329 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177338 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177344 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177351 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177368 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:24.177360487 +0000 UTC m=+26.851184020 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177399 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:24.177374217 +0000 UTC m=+26.851197750 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.177415 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:24.177409728 +0000 UTC m=+26.851233261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.190347 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.229805 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.234321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.234365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.234377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.234409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.234450 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.267129 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.278855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.279059 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.279094 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.279113 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 
25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.279190 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:24.27916887 +0000 UTC m=+26.952992563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.309368 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.337032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.337070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.337079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc 
kubenswrapper[4832]: I0125 07:57:20.337097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.337108 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.349939 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.389273 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.432323 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.439996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.440040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.440052 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.440069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.440078 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.471568 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.511192 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.542789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.542855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.542873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.542899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.542914 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.548258 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.592002 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:40:11.77811035 +0000 UTC Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.596932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.632210 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.645498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.645572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.645586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.645609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.645622 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.668913 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.668936 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.669063 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:20 crc kubenswrapper[4832]: E0125 07:57:20.669142 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.670001 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.709808 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.748445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.748485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.748498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.748515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.748528 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.751951 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:
57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.799263 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.851521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.851569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.851582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.851599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.851612 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.854485 4832 generic.go:334] "Generic (PLEG): container finished" podID="947f1c61-f061-4448-b301-9c2554b67933" containerID="3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7" exitCode=0 Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.854593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerDied","Data":"3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.872275 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.887291 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.910295 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.953451 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.954117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.954151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.954161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.954176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.954186 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:20Z","lastTransitionTime":"2026-01-25T07:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:20 crc kubenswrapper[4832]: I0125 07:57:20.994174 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:20Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.030607 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.056525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.056553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.056565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.056581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.056592 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.070470 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229
266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.109313 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.149810 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.158588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc 
kubenswrapper[4832]: I0125 07:57:21.158759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.158818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.158880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.158951 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.206221 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.228465 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.261178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.261218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.261227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.261241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.261253 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.267790 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.307218 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd
5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.350933 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.362765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.362800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.362811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.362826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.362837 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.388536 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.464786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.464824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.464838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.464854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.464865 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.567637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.567881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.567893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.567908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.567919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.592617 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:03:56.691170362 +0000 UTC Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.669587 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:21 crc kubenswrapper[4832]: E0125 07:57:21.669829 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.671969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.672007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.672017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.672033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.672042 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.774530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.774591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.774610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.774630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.774643 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.862133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.865295 4832 generic.go:334] "Generic (PLEG): container finished" podID="947f1c61-f061-4448-b301-9c2554b67933" containerID="a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3" exitCode=0 Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.865332 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerDied","Data":"a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.879443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.879514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.879531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.879559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.879574 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.879844 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.904241 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.917587 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.939066 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.953640 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.969849 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.981642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.981687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.981697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.981713 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.981723 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:21Z","lastTransitionTime":"2026-01-25T07:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.982484 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:21 crc kubenswrapper[4832]: I0125 07:57:21.996186 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:21Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.014248 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.025929 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.034911 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.043971 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e1
8d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.053325 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.063192 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.073790 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.084588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.084627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.084637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.084652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.084662 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.187225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.187254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.187263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.187279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.187289 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.290472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.290510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.290526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.290541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.290551 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.393024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.393059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.393067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.393081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.393094 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.495328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.495361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.495369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.495399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.495409 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.593687 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:45:05.334847234 +0000 UTC Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.597834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.597893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.597911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.597939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.597957 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.669273 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.669354 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:22 crc kubenswrapper[4832]: E0125 07:57:22.669449 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:22 crc kubenswrapper[4832]: E0125 07:57:22.669544 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.700602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.700656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.700665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.700681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.700722 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.806737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.806778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.806786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.806801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.806812 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.871568 4832 generic.go:334] "Generic (PLEG): container finished" podID="947f1c61-f061-4448-b301-9c2554b67933" containerID="0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4" exitCode=0 Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.871623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerDied","Data":"0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.887402 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.901337 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.908655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.908680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.908688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.908705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.908718 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:22Z","lastTransitionTime":"2026-01-25T07:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.920666 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.933702 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.944952 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.958422 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.968537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.985627 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07
:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:22 crc kubenswrapper[4832]: I0125 07:57:22.999270 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:22Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.010747 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.015526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc 
kubenswrapper[4832]: I0125 07:57:23.015565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.015600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.015619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.015629 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.023999 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.040416 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.052642 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.062948 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.073717 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.117970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.118056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.118069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.118099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.118114 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.220921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.220968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.220980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.221000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.221016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.323708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.323875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.323900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.323919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.323931 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.426116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.426166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.426178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.426196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.426207 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.529006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.529044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.529055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.529071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.529081 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.594207 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:14:32.542614397 +0000 UTC Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.631585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.631628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.631637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.631652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.631667 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.669151 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:23 crc kubenswrapper[4832]: E0125 07:57:23.669320 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.734329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.734396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.734410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.734425 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.734437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.837102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.837147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.837165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.837191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.837209 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.881276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.881549 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.881579 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.889695 4832 generic.go:334] "Generic (PLEG): container finished" podID="947f1c61-f061-4448-b301-9c2554b67933" containerID="21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c" exitCode=0 Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.889783 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerDied","Data":"21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.903641 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.907665 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.911565 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.917254 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.950699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.951071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.951086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.951106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.951122 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:23Z","lastTransitionTime":"2026-01-25T07:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.973238 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:
57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:23 crc kubenswrapper[4832]: I0125 07:57:23.991889 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:23Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.003716 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.015404 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.025506 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.035809 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.047625 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.053922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.053952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.053961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.053976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.053986 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.061705 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.076259 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.088125 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.099624 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.110016 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.127637 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.140458 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.149708 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.156230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.156254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.156263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.156278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.156290 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.166146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.178004 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.190434 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.203152 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.219480 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.220219 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.220318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220405 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:32.220369103 +0000 UTC m=+34.894192636 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.220455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.220509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220546 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220595 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220637 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:32.220628291 +0000 UTC m=+34.894451824 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220595 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220689 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220562 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220746 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:32.220732574 +0000 UTC m=+34.894556107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.220762 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:32.220757045 +0000 UTC m=+34.894580578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.237413 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.248794 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.258340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.258365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.258373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.258399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.258416 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.260340 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.271930 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd
5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.283509 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.295086 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.310216 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.321787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.321998 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.322031 4832 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.322043 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.322109 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:32.322091814 +0000 UTC m=+34.995915347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.326167 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.360766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.360829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.360848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.360865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.361165 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.463494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.463537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.463548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.463565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.463577 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.566107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.566139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.566152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.566166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.566177 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.594879 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:32:59.54248745 +0000 UTC Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.668130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.668168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.668180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.668196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.668207 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.668578 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.668654 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.668842 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:24 crc kubenswrapper[4832]: E0125 07:57:24.668934 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.770378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.770425 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.770433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.770446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.770455 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.873359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.873407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.873416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.873430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.873442 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.896909 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" event={"ID":"947f1c61-f061-4448-b301-9c2554b67933","Type":"ContainerStarted","Data":"62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.896988 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.910792 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.927853 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.944248 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.961067 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.976081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.976131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.976142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.976157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.976167 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:24Z","lastTransitionTime":"2026-01-25T07:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.976503 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:24 crc kubenswrapper[4832]: I0125 07:57:24.986976 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:24Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.008342 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.031284 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.046669 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.059572 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.071050 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.078614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.078659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.078668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.078683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.078694 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.083556 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.093602 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.102368 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642
dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.111568 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.180551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.180855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.180915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc 
kubenswrapper[4832]: I0125 07:57:25.180988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.181044 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.283288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.283315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.283323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.283339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.283349 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.385148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.385178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.385187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.385202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.385218 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.487538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.487642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.487684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.487725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.487753 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.591024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.591075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.591087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.591133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.591148 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.595234 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:30:17.680700243 +0000 UTC Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.669264 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:25 crc kubenswrapper[4832]: E0125 07:57:25.669463 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.693478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.693814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.693905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.693998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.694121 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.797029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.797083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.797095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.797119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.797130 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.899983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.900021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.900031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.900046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.900056 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:25Z","lastTransitionTime":"2026-01-25T07:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.903803 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/0.log" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.907450 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9" exitCode=1 Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.907516 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9"} Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.908481 4832 scope.go:117] "RemoveContainer" containerID="0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.937333 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.956193 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.978467 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:25 crc kubenswrapper[4832]: I0125 07:57:25.996729 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:25Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.003077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.003124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.003138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.003157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.003171 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.016188 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.030759 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.039434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.039530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.039543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.039559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.039569 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.045904 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.056340 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.060620 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.062740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.062777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.062791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc 
kubenswrapper[4832]: I0125 07:57:26.062817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.062835 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.076198 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.081018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.081078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.081111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.081143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.081161 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.086415 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.103402 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.104353 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742f
d0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.107445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.107476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.107490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.107516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.107530 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.118524 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.121434 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.129060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.129099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.129112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.129132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.129144 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.135930 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.148579 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.148668 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.148862 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.150892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.150918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.150929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.150947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.150959 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.159448 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.177583 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:25Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0125 07:57:25.333460 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 07:57:25.333530 6081 handler.go:190] Sending 
*v1.EgressIP event handler 8 for removal\\\\nI0125 07:57:25.333554 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 07:57:25.333566 6081 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 07:57:25.333572 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 07:57:25.333584 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 07:57:25.333592 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 07:57:25.333604 6081 factory.go:656] Stopping watch factory\\\\nI0125 07:57:25.333619 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 07:57:25.333630 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 07:57:25.333625 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 07:57:25.333641 6081 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0125 07:57:25.333650 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 07:57:25.333645 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 07:57:25.333660 6081 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.254261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.254304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.254315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.254332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.254348 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.356773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.356817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.356829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.356846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.356862 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.459554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.459611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.459621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.459642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.459653 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.562949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.562991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.563002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.563058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.563071 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.595591 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:14:53.13262863 +0000 UTC Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.667271 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.667342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.667364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.667423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.667449 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.668800 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.668973 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.668809 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.669611 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.772555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.772614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.772627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.772647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.772657 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.876030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.876114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.876140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.876215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.876250 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.915027 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/1.log" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.915751 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/0.log" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.920552 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858" exitCode=1 Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.920648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.920827 4832 scope.go:117] "RemoveContainer" containerID="0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.922006 4832 scope.go:117] "RemoveContainer" containerID="535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858" Jan 25 07:57:26 crc kubenswrapper[4832]: E0125 07:57:26.922315 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.938448 4832 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.958937 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.979880 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.981249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.981294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.981310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.981329 
4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.981350 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:26Z","lastTransitionTime":"2026-01-25T07:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:26 crc kubenswrapper[4832]: I0125 07:57:26.994236 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:26Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.005241 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.023612 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:25Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0125 07:57:25.333460 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 07:57:25.333530 6081 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 07:57:25.333554 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0125 07:57:25.333566 6081 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 07:57:25.333572 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 07:57:25.333584 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 07:57:25.333592 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 07:57:25.333604 6081 factory.go:656] Stopping watch factory\\\\nI0125 07:57:25.333619 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 07:57:25.333630 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 07:57:25.333625 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 07:57:25.333641 6081 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0125 07:57:25.333650 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 07:57:25.333645 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 07:57:25.333660 6081 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: 
[]services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":
\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.038221 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.057134 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.075120 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.085648 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.085712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.085725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.085748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.085762 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.093976 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z 
is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.105272 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.116782 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.127079 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.136832 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.147351 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.188570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.188619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.188631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.188648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.188659 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.290605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.290660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.290675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.290697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.290713 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.395676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.395719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.395729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.395745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.395757 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.498511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.498550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.498560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.498575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.498583 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.596274 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:25:27.082009976 +0000 UTC Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.601130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.601179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.601191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.601210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.601223 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.668643 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:27 crc kubenswrapper[4832]: E0125 07:57:27.668810 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.682218 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.697473 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.703640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.703672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.703682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.703695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.703706 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.710583 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.724492 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.738958 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.753828 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.772379 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.789538 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.803099 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.805459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.805498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.805511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.805530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.805542 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.819599 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.831546 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.856695 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c672a6d2179ac4f2004e0caeaec41230a60abe1473535c59b3a5cebb1d244f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:25Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI0125 07:57:25.333460 6081 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 07:57:25.333530 6081 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 07:57:25.333554 6081 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0125 07:57:25.333566 6081 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 07:57:25.333572 6081 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 07:57:25.333584 6081 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 07:57:25.333592 6081 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 07:57:25.333604 6081 factory.go:656] Stopping watch factory\\\\nI0125 07:57:25.333619 6081 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 07:57:25.333630 6081 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 07:57:25.333625 6081 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 07:57:25.333641 6081 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0125 07:57:25.333650 6081 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 07:57:25.333645 6081 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 07:57:25.333660 6081 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: 
[]services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":
\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.880525 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa60
7937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.896737 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc 
kubenswrapper[4832]: I0125 07:57:27.909470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.909495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.909504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.909518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.909527 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:27Z","lastTransitionTime":"2026-01-25T07:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.911671 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z 
is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.925215 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/1.log" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.928489 4832 scope.go:117] "RemoveContainer" containerID="535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858" Jan 25 07:57:27 crc kubenswrapper[4832]: E0125 07:57:27.928670 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.941299 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.953008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.962700 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.977310 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:27 crc kubenswrapper[4832]: I0125 07:57:27.989538 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.000815 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.012857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.013041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.013105 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc 
kubenswrapper[4832]: I0125 07:57:28.013166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.013227 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.017294 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21
c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.033672 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.049836 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.063406 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.085007 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.099527 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.116472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.116522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.116535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.116554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.116569 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.120773 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.133749 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.147436 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:28Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.219565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc 
kubenswrapper[4832]: I0125 07:57:28.219657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.219685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.219722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.219750 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.321891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.322166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.322295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.322418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.322565 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.425466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.425519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.425528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.425544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.425553 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.528522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.528568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.528582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.528602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.528626 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.596934 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:15:57.557597529 +0000 UTC Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.631539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.631586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.631595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.631609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.631619 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.668778 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.668862 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:28 crc kubenswrapper[4832]: E0125 07:57:28.668921 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:28 crc kubenswrapper[4832]: E0125 07:57:28.669003 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.734141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.734187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.734196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.734210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.734220 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.835764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.835807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.835816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.835830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.835840 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.938063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.938123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.938135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.938154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:28 crc kubenswrapper[4832]: I0125 07:57:28.938168 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:28Z","lastTransitionTime":"2026-01-25T07:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.040958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.041001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.041009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.041023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.041032 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.143379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.143429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.143439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.143454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.143463 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.245820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.245855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.245865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.245880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.245890 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.348712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.348752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.348763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.348779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.348794 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.452733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.452788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.452805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.452827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.452844 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.556761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.556817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.556831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.556856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.556871 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.597542 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:26:09.697620718 +0000 UTC Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.660448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.660515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.660533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.660562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.660581 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.667177 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc"] Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.667869 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.668574 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:29 crc kubenswrapper[4832]: E0125 07:57:29.668725 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.669562 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.670551 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.700701 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.715015 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.734984 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.749367 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.763618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.763669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.763686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.763709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.763726 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.772135 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.782597 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd2cg\" (UniqueName: \"kubernetes.io/projected/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-kube-api-access-cd2cg\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.782688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.782721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.782909 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.789433 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.805916 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f
2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.824027 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.843164 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.866141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.866208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.866219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.866232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.866261 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.869804 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.884149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.884237 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd2cg\" (UniqueName: \"kubernetes.io/projected/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-kube-api-access-cd2cg\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.884315 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 
07:57:29.884346 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.885277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.885341 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.885416 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.890540 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.905323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd2cg\" (UniqueName: \"kubernetes.io/projected/1be4ce34-f46c-4ee9-8fb5-7ac13dafef85-kube-api-access-cd2cg\") pod \"ovnkube-control-plane-749d76644c-ct7hc\" (UID: \"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.908101 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.922797 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.936781 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.950021 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.968939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.968995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.969006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.969020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.969030 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:29Z","lastTransitionTime":"2026-01-25T07:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.976191 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:29Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:29 crc kubenswrapper[4832]: I0125 07:57:29.984277 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" Jan 25 07:57:30 crc kubenswrapper[4832]: W0125 07:57:30.000435 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1be4ce34_f46c_4ee9_8fb5_7ac13dafef85.slice/crio-0e7cafecb25fe6d45d599d473672e38a8374a593c63142b16919b91c5a939546 WatchSource:0}: Error finding container 0e7cafecb25fe6d45d599d473672e38a8374a593c63142b16919b91c5a939546: Status 404 returned error can't find the container with id 0e7cafecb25fe6d45d599d473672e38a8374a593c63142b16919b91c5a939546 Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.071319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.071357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.071367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.071399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.071411 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.174623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.174664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.174675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.174691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.174705 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.276948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.277006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.277015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.277030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.277040 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.379373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.379433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.379442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.379457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.379466 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.482157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.482199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.482211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.482228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.482239 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.585503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.585807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.585817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.585832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.585841 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.598190 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:24:05.281971976 +0000 UTC Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.668593 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.668645 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:30 crc kubenswrapper[4832]: E0125 07:57:30.668757 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:30 crc kubenswrapper[4832]: E0125 07:57:30.668818 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.688882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.688915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.688927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.688941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.688954 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.791752 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nzj5s"] Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.792165 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:30 crc kubenswrapper[4832]: E0125 07:57:30.792225 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.792773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.792824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.792838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.792867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.792880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.811236 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.828974 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.844059 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.859528 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.878336 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.894807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.894849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.894857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc 
kubenswrapper[4832]: I0125 07:57:30.894870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.894878 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.895049 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wc7l\" (UniqueName: \"kubernetes.io/projected/b1a15135-866b-4644-97aa-34c7da815b6b-kube-api-access-6wc7l\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.895151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.896679 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.915828 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.928966 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.940318 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" event={"ID":"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85","Type":"ContainerStarted","Data":"80d0c4fe9bedb92c87bfea3e2e7706dac8825366b74adb48b257fa32f31a6277"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.940377 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" event={"ID":"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85","Type":"ContainerStarted","Data":"0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.940407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" event={"ID":"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85","Type":"ContainerStarted","Data":"0e7cafecb25fe6d45d599d473672e38a8374a593c63142b16919b91c5a939546"} Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.942680 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.956166 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.968456 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.986699 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.996360 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wc7l\" (UniqueName: \"kubernetes.io/projected/b1a15135-866b-4644-97aa-34c7da815b6b-kube-api-access-6wc7l\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.996464 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:30 crc kubenswrapper[4832]: E0125 07:57:30.996675 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:30 crc kubenswrapper[4832]: E0125 07:57:30.996770 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:31.496746226 +0000 UTC m=+34.170569779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.998116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.998158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.998204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.998228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:30 crc kubenswrapper[4832]: I0125 07:57:30.998245 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:30Z","lastTransitionTime":"2026-01-25T07:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:30.999962 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:30Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.014970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wc7l\" (UniqueName: \"kubernetes.io/projected/b1a15135-866b-4644-97aa-34c7da815b6b-kube-api-access-6wc7l\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.020669 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.035060 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.048916 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.064045 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc 
kubenswrapper[4832]: I0125 07:57:31.078271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.090892 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b
3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.103577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.103627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.103642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.103663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.103678 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.110560 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.128445 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.140277 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.151745 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.163605 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.172983 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.189545 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.200527 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.206143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.206174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.206186 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.206202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.206214 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.214470 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.226141 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.237404 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.250062 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.261153 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc 
kubenswrapper[4832]: I0125 07:57:31.279560 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.291960 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:31Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.308931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.308991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.309002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.309025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.309035 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.410978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.411030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.411043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.411062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.411077 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.501698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:31 crc kubenswrapper[4832]: E0125 07:57:31.501816 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:31 crc kubenswrapper[4832]: E0125 07:57:31.501886 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:32.501871395 +0000 UTC m=+35.175694928 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.512988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.513025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.513041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.513056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.513066 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.598722 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:44:22.889299728 +0000 UTC Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.615426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.615464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.615473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.615494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.615506 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.668771 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:31 crc kubenswrapper[4832]: E0125 07:57:31.668889 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.717604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.717633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.717641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.717655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.717665 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.819869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.820101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.820168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.820296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.820367 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.922722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.922762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.922771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.922785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:31 crc kubenswrapper[4832]: I0125 07:57:31.922795 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:31Z","lastTransitionTime":"2026-01-25T07:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.025849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.025908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.025920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.025948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.025960 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.129279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.129350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.129370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.129433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.129462 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.233422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.233485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.233504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.233531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.233550 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.312582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.312829 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.312852 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:48.312817147 +0000 UTC m=+50.986640690 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.312921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.313036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313042 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313072 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313092 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313171 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:48.313145816 +0000 UTC m=+50.986969379 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313176 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313226 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:48.313214518 +0000 UTC m=+50.987038091 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313291 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.313345 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:48.313333461 +0000 UTC m=+50.987157004 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.337280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.337436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.337464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.337538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.337563 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.414637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.414953 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.415021 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.415044 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.415163 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:57:48.415123124 +0000 UTC m=+51.088946697 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.440194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.440231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.440241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.440255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.440265 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.516240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.516439 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.516921 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:34.516892077 +0000 UTC m=+37.190715650 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.542646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.543038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.543267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.543454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.543614 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.598950 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:58:03.846487262 +0000 UTC Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.648238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.648346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.648372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.648458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.648486 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.669522 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.669609 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.669675 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.670006 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.670167 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:32 crc kubenswrapper[4832]: E0125 07:57:32.670241 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.752593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.752648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.752666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.752690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.752707 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.856727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.856814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.856824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.856843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.856856 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.960847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.960912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.960930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.960958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:32 crc kubenswrapper[4832]: I0125 07:57:32.960982 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:32Z","lastTransitionTime":"2026-01-25T07:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.064117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.064187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.064201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.064218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.064259 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.167659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.167728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.167740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.167762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.167775 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.270780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.270836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.270850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.270872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.270887 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.376073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.376133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.376144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.376161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.376172 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.478478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.478529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.478543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.478561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.478574 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.580766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.580807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.580816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.580830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.580838 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.599272 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:23:56.854738367 +0000 UTC Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.669318 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:33 crc kubenswrapper[4832]: E0125 07:57:33.669723 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.683184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.683227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.683236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.683251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.683263 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.785614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.785680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.785697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.785724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.785742 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.888444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.888515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.888529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.888557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.888573 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.992188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.992252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.992266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.992295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:33 crc kubenswrapper[4832]: I0125 07:57:33.992318 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:33Z","lastTransitionTime":"2026-01-25T07:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.096245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.096345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.096379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.096458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.096480 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.198972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.199096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.199118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.199154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.199178 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.303132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.303197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.303214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.303249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.303267 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.406376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.406498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.406523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.406556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.406583 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.510302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.510353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.510362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.510380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.510430 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.537411 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:34 crc kubenswrapper[4832]: E0125 07:57:34.537621 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:34 crc kubenswrapper[4832]: E0125 07:57:34.537714 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:38.537689104 +0000 UTC m=+41.211512667 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.599725 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:58:37.663964936 +0000 UTC Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.613577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.613627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.613644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.613673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.613691 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.669696 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.669785 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.669823 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:34 crc kubenswrapper[4832]: E0125 07:57:34.670048 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:34 crc kubenswrapper[4832]: E0125 07:57:34.670286 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:34 crc kubenswrapper[4832]: E0125 07:57:34.670490 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.717229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.717312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.717329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.717359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.717428 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.821251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.821318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.821337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.821364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.821421 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.912672 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.924061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.924104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.924113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.924129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.924139 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:34Z","lastTransitionTime":"2026-01-25T07:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.933580 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:34Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.954658 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:34Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.974595 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:34Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:34 crc kubenswrapper[4832]: I0125 07:57:34.990593 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:34Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.023137 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.027506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.027552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.027561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.027575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.027584 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.039361 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.067784 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2c
e996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.092197 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.117996 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.129507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc 
kubenswrapper[4832]: I0125 07:57:35.129551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.129568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.129591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.129608 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.133696 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.155712 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.179212 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.193767 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.207074 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.223434 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.231721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.231871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.231978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc 
kubenswrapper[4832]: I0125 07:57:35.232076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.232161 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.240401 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.261725 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:35Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.335527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.335585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.335603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.335627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.335645 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.439544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.439619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.439641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.439664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.439683 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.544114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.544186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.544204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.544680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.544743 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.600880 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:27:52.913533495 +0000 UTC Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.647489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.647525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.647532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.647546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.647555 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.669347 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:35 crc kubenswrapper[4832]: E0125 07:57:35.669515 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.750133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.750201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.750214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.750231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.750244 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.852806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.852854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.852869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.852891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.852908 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.956166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.956222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.956234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.956251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:35 crc kubenswrapper[4832]: I0125 07:57:35.956266 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:35Z","lastTransitionTime":"2026-01-25T07:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.059506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.059592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.059610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.059630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.059671 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.162577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.162680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.162705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.162725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.162737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.226525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.226559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.226568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.226583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.226594 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.243810 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:36Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.247665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.247693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.247701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.247715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.247725 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.260177 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:36Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.266987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.267153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.267251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.267348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.267472 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.280146 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:36Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.284181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.284235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.284252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.284275 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.284367 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.298473 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:36Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.302822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.302878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.302896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.302919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.302937 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.319029 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:36Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.319364 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.321717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.321782 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.321798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.321818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.321834 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.424848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.424932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.424948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.424994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.425011 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.528124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.528195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.528213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.528240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.528259 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.601368 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:59:35.93460626 +0000 UTC Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.630846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.631027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.631048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.631073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.631092 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.668904 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.668989 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.668930 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.669103 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.669724 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:36 crc kubenswrapper[4832]: E0125 07:57:36.669854 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.734178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.734226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.734241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.734260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.734275 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.837037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.837133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.837154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.837212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.837230 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.939836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.939916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.939936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.939962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:36 crc kubenswrapper[4832]: I0125 07:57:36.939981 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:36Z","lastTransitionTime":"2026-01-25T07:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.042899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.042972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.042994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.043022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.043045 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.146235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.146302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.146320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.146344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.146441 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.248969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.248997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.249006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.249020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.249029 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.352530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.352580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.352596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.352619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.352635 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.455197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.455272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.455291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.455321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.455342 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.558345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.558432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.558446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.558482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.558497 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.602541 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:53:50.854075758 +0000 UTC Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.662068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.662116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.662129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.662156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.662172 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.668630 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:37 crc kubenswrapper[4832]: E0125 07:57:37.668833 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.685823 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\"
:\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.698933 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc 
kubenswrapper[4832]: I0125 07:57:37.733878 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.748356 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.759904 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.763621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.763653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.763664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.763678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.763689 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.773970 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.787013 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.800762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.811552 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.823043 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.838407 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.849431 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.865947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.866561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.866598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.866624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.866645 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.873644 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.888361 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.906257 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.917687 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.928610 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:37Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.968917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.968955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.968965 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.968980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:37 crc kubenswrapper[4832]: I0125 07:57:37.968991 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:37Z","lastTransitionTime":"2026-01-25T07:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.071680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.071714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.071723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.071737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.071747 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.173812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.173869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.173880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.173901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.173919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.277084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.277176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.277192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.277214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.277227 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.379899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.379951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.379967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.379990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.380004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.482337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.482433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.482458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.482480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.482528 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.578234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:38 crc kubenswrapper[4832]: E0125 07:57:38.578428 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:38 crc kubenswrapper[4832]: E0125 07:57:38.578527 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:57:46.578506506 +0000 UTC m=+49.252330129 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.584820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.584852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.584877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.584892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.584902 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.603338 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:38:34.627444681 +0000 UTC Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.668652 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.668685 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.668659 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:38 crc kubenswrapper[4832]: E0125 07:57:38.668794 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:38 crc kubenswrapper[4832]: E0125 07:57:38.668902 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:38 crc kubenswrapper[4832]: E0125 07:57:38.669017 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.687785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.687832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.687840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.687853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.687891 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.790640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.790686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.790697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.790714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.790725 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.894988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.895062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.895094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.895132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.895149 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.997204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.997256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.997270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.997290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:38 crc kubenswrapper[4832]: I0125 07:57:38.997302 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:38Z","lastTransitionTime":"2026-01-25T07:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.099053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.099121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.099144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.099174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.099195 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.202045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.202103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.202112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.202128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.202138 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.304306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.304356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.304364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.304377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.304429 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.313822 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.315149 4832 scope.go:117] "RemoveContainer" containerID="535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.407061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.407293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.407304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.407321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.407332 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.510359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.510527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.510552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.510588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.510611 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.604473 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:14:01.767861326 +0000 UTC Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.613596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.613660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.613683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.613714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.613737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.669449 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:39 crc kubenswrapper[4832]: E0125 07:57:39.669570 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.715582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.715619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.715631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.715647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.715657 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.817785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.817831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.817840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.817855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.817864 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.920529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.920581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.920592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.920610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.920623 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:39Z","lastTransitionTime":"2026-01-25T07:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.977545 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/1.log" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.980076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355"} Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.980511 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:57:39 crc kubenswrapper[4832]: I0125 07:57:39.991870 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:39Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.001234 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:39Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.013890 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.023404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.023431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.023441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.023455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.023464 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.031312 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.043894 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.065290 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.080148 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.091844 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.101933 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.126100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.126132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.126141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.126161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.126172 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.157594 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.177427 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.192186 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.204766 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.214647 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.225273 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.227563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc 
kubenswrapper[4832]: I0125 07:57:40.227585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.227593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.227606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.227615 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.234791 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.251231 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723
134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:40Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.330135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.330160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.330167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.330180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.330190 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.432345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.432434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.432458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.432483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.432500 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.534768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.534828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.534852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.534877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.534894 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.605099 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:37:21.125329297 +0000 UTC Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.636731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.636790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.636812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.636839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.636859 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.669224 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.669361 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.669521 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:40 crc kubenswrapper[4832]: E0125 07:57:40.669529 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:40 crc kubenswrapper[4832]: E0125 07:57:40.669924 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:40 crc kubenswrapper[4832]: E0125 07:57:40.670062 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.738820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.738871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.738887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.738910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.738927 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.841482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.841727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.841869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.841987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.842114 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.944049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.944100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.944118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.944137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.944151 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:40Z","lastTransitionTime":"2026-01-25T07:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.985603 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/2.log" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.986573 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/1.log" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.989944 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355" exitCode=1 Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.990016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355"} Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.990114 4832 scope.go:117] "RemoveContainer" containerID="535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858" Jan 25 07:57:40 crc kubenswrapper[4832]: I0125 07:57:40.990986 4832 scope.go:117] "RemoveContainer" containerID="46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355" Jan 25 07:57:40 crc kubenswrapper[4832]: E0125 07:57:40.991243 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.009997 4832 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.026894 4832 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc 
kubenswrapper[4832]: I0125 07:57:41.046785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.046939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.046968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.046999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.047024 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.056378 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.078569 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.093942 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.111349 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.128505 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.145752 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.149380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.149485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.149506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.149535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.149553 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.164828 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.179762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.196936 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.207836 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.227840 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535d226369544a445f4a5592a1a733db46fea474ae6700626093ea53a57fa858\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:26Z\\\",\\\"message\\\":\\\"lse, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.139\\\\\\\", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0125 07:57:26.725541 6225 services_controller.go:452] Built service openshift-apiserver/check-endpoints per-node LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725548 6225 services_controller.go:453] Built service openshift-apiserver/check-endpoints template LB for network=default: []services.LB{}\\\\nI0125 07:57:26.725513 6225 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 07:57:26.725560 6225 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-6dqw2\\\\nF0125 07:57:26.725573 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] 
Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796f
bf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.238358 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.251112 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.251539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.251587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.251600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.251614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.251623 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.263185 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229
266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.275237 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:41Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.353695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 
07:57:41.353727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.353737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.353749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.353758 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.455579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.455630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.455639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.455681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.455692 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.558793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.558839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.558850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.558869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.558881 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.605701 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:20:10.173089004 +0000 UTC Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.662190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.662239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.662250 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.662267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.662283 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.669670 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:41 crc kubenswrapper[4832]: E0125 07:57:41.669801 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.765345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.765465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.765493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.765521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.765552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.868989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.869039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.869055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.869078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.869095 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.972189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.972235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.972253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.972276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.972292 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:41Z","lastTransitionTime":"2026-01-25T07:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:41 crc kubenswrapper[4832]: I0125 07:57:41.998729 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/2.log" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.007317 4832 scope.go:117] "RemoveContainer" containerID="46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355" Jan 25 07:57:42 crc kubenswrapper[4832]: E0125 07:57:42.007665 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.027187 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.046946 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.069856 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.075019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.075082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.075105 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.075137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.075161 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.092140 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.108831 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.131098 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 
ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.165015 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.177704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.177758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.177772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.177794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.177809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.180433 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.194127 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.206118 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc 
kubenswrapper[4832]: I0125 07:57:42.218657 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.236019 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.248131 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.259085 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.271696 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.280050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.280078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.280087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc 
kubenswrapper[4832]: I0125 07:57:42.280100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.280110 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.284973 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.300588 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:42Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.382673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.382740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.382772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.382791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.382805 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.485372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.485430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.485472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.485487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.485498 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.588753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.588827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.588852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.588884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.588908 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.606249 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:20:05.327439893 +0000 UTC Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.669349 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.669363 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.669528 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:42 crc kubenswrapper[4832]: E0125 07:57:42.669710 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:42 crc kubenswrapper[4832]: E0125 07:57:42.669826 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:42 crc kubenswrapper[4832]: E0125 07:57:42.669937 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.691348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.691431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.691451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.691477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.691495 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.795431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.795499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.795520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.795546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.795564 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.898020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.898056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.898065 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.898077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:42 crc kubenswrapper[4832]: I0125 07:57:42.898086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:42Z","lastTransitionTime":"2026-01-25T07:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.000235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.000282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.000303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.000328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.000344 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.102222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.102257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.102272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.102293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.102308 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.205309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.205371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.205399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.205420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.205433 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.307944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.308004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.308022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.308047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.308063 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.411749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.411821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.411850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.411882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.411906 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.515264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.515311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.515319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.515334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.515342 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.606903 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:28:40.364823744 +0000 UTC Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.617913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.617951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.617963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.617979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.617989 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.668679 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:43 crc kubenswrapper[4832]: E0125 07:57:43.668834 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.723948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.724017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.724039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.724064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.724095 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.827163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.827233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.827256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.827286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.827308 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.929640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.929678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.929687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.929699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:43 crc kubenswrapper[4832]: I0125 07:57:43.929708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:43Z","lastTransitionTime":"2026-01-25T07:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.032646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.032711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.032729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.032756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.032777 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.136470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.136529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.136544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.136563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.136578 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.239523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.239581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.239598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.239620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.239637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.342751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.342785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.342795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.342812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.342829 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.445605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.445649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.445663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.445683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.445700 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.548014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.548061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.548087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.548112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.548128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.607577 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:58:06.347050128 +0000 UTC Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.650647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.650707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.650743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.650779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.650801 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.669179 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.669227 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:44 crc kubenswrapper[4832]: E0125 07:57:44.669285 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.669340 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:44 crc kubenswrapper[4832]: E0125 07:57:44.669526 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:44 crc kubenswrapper[4832]: E0125 07:57:44.669690 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.753782 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.753910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.753932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.753962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.753984 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.856314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.856346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.856355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.856370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.856379 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.958518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.958560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.958573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.958591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:44 crc kubenswrapper[4832]: I0125 07:57:44.958601 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:44Z","lastTransitionTime":"2026-01-25T07:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.061366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.061459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.061478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.061517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.061533 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.164920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.164974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.164990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.165018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.165035 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.268249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.268306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.268318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.268339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.268351 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.372051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.372118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.372141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.372174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.372195 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.476046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.476107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.476123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.476152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.476170 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.532025 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.545366 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.558657 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.579106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.579150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.579161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.579177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.579190 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.581310 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.604589 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 
ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.607900 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:54:05.577465283 +0000 UTC Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.614991 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.626059 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.638855 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.655011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.665406 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.668831 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:45 crc kubenswrapper[4832]: E0125 07:57:45.668970 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.680886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.680969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.680984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.681006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.681022 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.680996 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc 
kubenswrapper[4832]: I0125 07:57:45.704888 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.717806 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.730913 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.742066 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2b
c87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.752763 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.763504 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.773704 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.782431 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:45Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.783272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.783302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.783311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.783326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.783339 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.885726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.885766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.885777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.885794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.885807 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.987948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.987986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.987994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.988007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:45 crc kubenswrapper[4832]: I0125 07:57:45.988015 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:45Z","lastTransitionTime":"2026-01-25T07:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.090023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.090093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.090108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.090129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.090144 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.192562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.192628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.192650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.192685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.192703 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.294570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.294614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.294624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.294639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.294650 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.396739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.396771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.396782 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.396795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.396805 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.499070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.499132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.499143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.499162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.499173 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.601790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.601873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.601896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.601927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.601947 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.608753 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:02:40.740028274 +0000 UTC Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.657568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.657710 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.657782 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:58:02.657765997 +0000 UTC m=+65.331589530 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.666831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.666944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.666977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.667022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.667049 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.668948 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.669027 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.669168 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.669358 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.669495 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.669688 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.680302 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:46Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.685308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.685365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.685378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.685420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.685432 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.704610 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:46Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.710158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.710222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.710240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.710266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.710285 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.730413 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:46Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.735569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.735616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.735629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.735652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.735665 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.753901 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:46Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.761070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.761107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.761118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.761140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.761152 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.778198 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:46Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:46 crc kubenswrapper[4832]: E0125 07:57:46.778336 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.780655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.780685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.780696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.780712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.780722 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.884182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.884238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.884251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.884268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.884280 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.985978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.986027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.986042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.986061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:46 crc kubenswrapper[4832]: I0125 07:57:46.986074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:46Z","lastTransitionTime":"2026-01-25T07:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.088112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.088164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.088172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.088186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.088195 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.190828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.190883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.190899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.190917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.190928 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.292657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.292699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.292711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.292727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.292739 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.395000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.395035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.395045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.395058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.395070 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.497267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.497308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.497318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.497330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.497340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.599324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.599377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.599422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.599443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.599457 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.609244 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:17:30.524649633 +0000 UTC Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.669219 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:47 crc kubenswrapper[4832]: E0125 07:57:47.669351 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.684857 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\"
:\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.698016 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc 
kubenswrapper[4832]: I0125 07:57:47.702247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.702306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.702324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.702347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.702366 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.712689 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e58
5607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.732662 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.749526 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.760579 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.774260 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e1
8d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.787694 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.804642 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.804994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.805031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.805039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.805055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.805064 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.818216 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.832490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.845500 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.856474 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.872898 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.885120 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.899118 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.907198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.907244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.907267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.907295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.907538 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:47Z","lastTransitionTime":"2026-01-25T07:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.917797 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229
266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:47 crc kubenswrapper[4832]: I0125 07:57:47.929748 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:47Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.010082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 
07:57:48.010573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.010585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.010602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.010614 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.115500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.115560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.115579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.115598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.115615 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.218643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.218704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.218722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.218745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.218762 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.320357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.320412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.320474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.320491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.320501 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.373115 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.373205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373279 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373313 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:58:20.373273509 +0000 UTC m=+83.047097082 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373360 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:58:20.373344251 +0000 UTC m=+83.047167824 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.373455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.373534 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373601 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373616 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373627 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373666 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:58:20.37365358 +0000 UTC m=+83.047477113 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373692 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.373749 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:58:20.373731733 +0000 UTC m=+83.047555316 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.423604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.423678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.423697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.423721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.423741 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.474527 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.474673 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.474691 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.474703 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.474760 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:58:20.474745812 +0000 UTC m=+83.148569355 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.527175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.527235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.527252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.527273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.527291 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.610264 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:10:31.091193597 +0000 UTC Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.630051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.630081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.630090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.630104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.630114 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.668730 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.668761 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.668768 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.668883 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.668967 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:48 crc kubenswrapper[4832]: E0125 07:57:48.669031 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.732321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.732357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.732365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.732378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.732407 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.834236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.834280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.834291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.834309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.834320 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.937796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.937861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.937878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.937902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:48 crc kubenswrapper[4832]: I0125 07:57:48.937919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:48Z","lastTransitionTime":"2026-01-25T07:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.041075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.041141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.041156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.041176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.041192 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.143772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.143820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.143833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.143849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.143861 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.246704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.246742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.246753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.246767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.246777 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.348854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.348905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.348922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.348943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.348961 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.452861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.452936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.452955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.452983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.453004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.556279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.556341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.556354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.556370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.556425 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.611312 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:11:32.149686025 +0000 UTC Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.660366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.660469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.660517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.660564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.660587 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.669541 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:49 crc kubenswrapper[4832]: E0125 07:57:49.669688 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.763872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.763947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.763964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.763988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.764006 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.867038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.867098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.867116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.867138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.867153 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.970768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.970847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.970870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.970896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:49 crc kubenswrapper[4832]: I0125 07:57:49.970915 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:49Z","lastTransitionTime":"2026-01-25T07:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.073667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.073733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.073755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.073783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.073807 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.177274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.177341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.177358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.177409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.177426 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.280169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.280194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.280202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.280214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.280223 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.382579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.382637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.382654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.382676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.382694 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.486091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.486123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.486132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.486145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.486155 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.592326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.592412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.592423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.592439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.592451 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.612037 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:46:54.23994199 +0000 UTC Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.668908 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.668923 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.669090 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:50 crc kubenswrapper[4832]: E0125 07:57:50.669267 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:50 crc kubenswrapper[4832]: E0125 07:57:50.669459 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:50 crc kubenswrapper[4832]: E0125 07:57:50.669677 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.696083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.696119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.696127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.696140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.696149 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.798269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.798327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.798349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.798377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.798435 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.901171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.901327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.901510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.901562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:50 crc kubenswrapper[4832]: I0125 07:57:50.901586 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:50Z","lastTransitionTime":"2026-01-25T07:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.004783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.004910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.004942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.004970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.004991 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.108120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.108183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.108200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.108221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.108237 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.210881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.210937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.210947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.210962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.210971 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.313191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.313253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.313269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.313290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.313303 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.415341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.415373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.415403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.415420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.415430 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.517810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.517846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.517892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.517906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.517915 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.612368 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:46:05.960128665 +0000 UTC Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.620553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.620588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.620626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.620653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.620667 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.669265 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:51 crc kubenswrapper[4832]: E0125 07:57:51.669539 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.723124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.723180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.723192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.723212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.723226 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.827280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.827367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.827414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.827447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.827473 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.931367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.931504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.931518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.931537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:51 crc kubenswrapper[4832]: I0125 07:57:51.931550 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:51Z","lastTransitionTime":"2026-01-25T07:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.035054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.035122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.035141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.035172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.035194 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.138804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.138862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.138876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.138903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.138922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.242511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.242556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.242571 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.242677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.242694 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.346069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.346135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.346147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.346169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.346186 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.449929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.449993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.450005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.450027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.450042 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.552833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.552887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.552902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.552919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.552929 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.612945 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:45:16.950966015 +0000 UTC Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.656009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.656084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.656102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.656132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.656154 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.669322 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:52 crc kubenswrapper[4832]: E0125 07:57:52.669549 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.669767 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:52 crc kubenswrapper[4832]: E0125 07:57:52.669957 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.670189 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:52 crc kubenswrapper[4832]: E0125 07:57:52.670328 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.759224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.759269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.759278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.759296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.759306 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.862926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.863280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.863427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.863544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.863663 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.967152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.967572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.967661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.967741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:52 crc kubenswrapper[4832]: I0125 07:57:52.967820 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:52Z","lastTransitionTime":"2026-01-25T07:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.071834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.072259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.072365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.072486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.072583 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.175814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.175876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.175886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.175908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.175924 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.279237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.279289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.279307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.279331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.279350 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.382618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.382663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.382675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.382696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.382712 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.484845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.484891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.484907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.484930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.484947 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.588479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.588534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.588553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.588579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.588597 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.614078 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:36:15.177172977 +0000 UTC Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.668937 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:53 crc kubenswrapper[4832]: E0125 07:57:53.669065 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.670850 4832 scope.go:117] "RemoveContainer" containerID="46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355" Jan 25 07:57:53 crc kubenswrapper[4832]: E0125 07:57:53.671272 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.691397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.691451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.691464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.691489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.691507 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.793811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.793906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.793929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.793951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.793966 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.896595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.896672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.896708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.896744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:53 crc kubenswrapper[4832]: I0125 07:57:53.896762 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:53Z","lastTransitionTime":"2026-01-25T07:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.000297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.000329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.000341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.000357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.000370 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.103081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.103171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.103185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.103205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.103218 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.206100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.206145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.206186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.206228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.206241 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.309320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.309377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.309427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.309457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.309483 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.412914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.412978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.412997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.413023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.413041 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.515762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.515821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.515840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.515863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.515880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.614513 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:43:11.570566149 +0000 UTC Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.618599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.618663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.618686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.618717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.618738 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.669669 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.669699 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:54 crc kubenswrapper[4832]: E0125 07:57:54.669797 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.669928 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:54 crc kubenswrapper[4832]: E0125 07:57:54.670030 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:54 crc kubenswrapper[4832]: E0125 07:57:54.670292 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.721619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.721713 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.721739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.721770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.721794 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.824768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.824865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.824900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.824933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.824956 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.928056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.928101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.928111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.928132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:54 crc kubenswrapper[4832]: I0125 07:57:54.928153 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:54Z","lastTransitionTime":"2026-01-25T07:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.031404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.031469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.031481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.031502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.031517 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.134828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.134908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.134946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.134979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.135001 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.238133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.238198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.238210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.238229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.238243 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.340641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.340683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.340694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.340709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.340719 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.444766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.444845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.444867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.444895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.444929 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.547644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.547670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.547678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.547691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.547701 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.614701 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:32:23.634658036 +0000 UTC Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.650331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.650375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.650414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.650434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.650449 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.669108 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:55 crc kubenswrapper[4832]: E0125 07:57:55.669241 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.753591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.753660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.753676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.753699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.753717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.856192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.856249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.856262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.856288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.856302 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.958621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.958671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.958685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.958704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:55 crc kubenswrapper[4832]: I0125 07:57:55.958716 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:55Z","lastTransitionTime":"2026-01-25T07:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.060941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.060980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.060992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.061008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.061018 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.163768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.163803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.163812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.163827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.163836 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.266446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.266584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.266618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.266649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.266670 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.370652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.370759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.370773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.370793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.370806 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.474600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.474703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.474725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.474751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.474769 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.577455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.577502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.577519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.577542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.577559 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.615499 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:00:40.143990041 +0000 UTC Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.669119 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.669199 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:56 crc kubenswrapper[4832]: E0125 07:57:56.669284 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.669374 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:56 crc kubenswrapper[4832]: E0125 07:57:56.669640 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:56 crc kubenswrapper[4832]: E0125 07:57:56.670032 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.679951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.679996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.680012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.680035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.680052 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.782946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.783016 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.783041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.783068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.783086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.885800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.885845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.885855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.885871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.885880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.946840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.946873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.946882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.946895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.946904 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: E0125 07:57:56.959497 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:56Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.963100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.963134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.963147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.963162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.963174 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: E0125 07:57:56.976482 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:56Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.979666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.979689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.979697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.979710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.979719 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:56 crc kubenswrapper[4832]: E0125 07:57:56.991201 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:56Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.994904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.994994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.995012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.995030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:56 crc kubenswrapper[4832]: I0125 07:57:56.995043 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:56Z","lastTransitionTime":"2026-01-25T07:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: E0125 07:57:57.010021 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.013754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.013797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.013809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.013826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.013840 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: E0125 07:57:57.023822 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: E0125 07:57:57.023942 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.025314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.025360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.025373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.025408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.025423 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.131012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.131308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.131418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.131499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.131569 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.234452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.234529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.234539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.234554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.234564 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.337003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.337057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.337074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.337100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.337119 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.439966 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.439992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.440001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.440013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.440021 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.543051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.543110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.543127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.543145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.543157 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.616267 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:14:20.379347209 +0000 UTC Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.645290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.645585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.645681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.645771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.645891 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.668669 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:57 crc kubenswrapper[4832]: E0125 07:57:57.668801 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.685537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.699634 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.714508 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.725255 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.734987 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.746379 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.747879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.747964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.748022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.748048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.748065 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.768442 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.782110 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.792265 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.803917 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.813846 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.830092 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 
ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.843441 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.850630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.850681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.850694 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.850712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.850726 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.855419 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.873434 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.887180 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.901566 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.911201 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:57:57Z is after 2025-08-24T17:21:41Z" Jan 25 07:57:57 crc 
kubenswrapper[4832]: I0125 07:57:57.952819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.952880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.952897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.952919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:57 crc kubenswrapper[4832]: I0125 07:57:57.952936 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:57Z","lastTransitionTime":"2026-01-25T07:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.056537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.056587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.056599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.056642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.056656 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.160923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.160956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.160963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.160977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.160986 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.263216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.263279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.263297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.263325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.263343 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.366013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.366443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.366477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.366501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.366518 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.468924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.468989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.469344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.469432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.469448 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.572965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.573011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.573022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.573040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.573055 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.616471 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:32:02.523643187 +0000 UTC Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.669295 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.669439 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:57:58 crc kubenswrapper[4832]: E0125 07:57:58.669477 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.669321 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:57:58 crc kubenswrapper[4832]: E0125 07:57:58.669618 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:57:58 crc kubenswrapper[4832]: E0125 07:57:58.669797 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.675148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.675178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.675188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.675203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.675215 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.777496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.777526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.777533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.777546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.777554 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.879516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.879802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.879881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.879997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.880101 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.982654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.982704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.982716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.982735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:58 crc kubenswrapper[4832]: I0125 07:57:58.982747 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:58Z","lastTransitionTime":"2026-01-25T07:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.086545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.086592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.086604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.086623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.086637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.189480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.189905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.190050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.190203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.190379 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.297523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.297617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.297653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.297675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.297691 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.400311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.400369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.400401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.400420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.400433 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.503353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.503423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.503436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.503472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.503483 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.606887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.606946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.606963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.606987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.607005 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.617139 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:30:34.906730464 +0000 UTC Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.668954 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:57:59 crc kubenswrapper[4832]: E0125 07:57:59.669180 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.710273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.710314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.710329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.710347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.710361 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.813168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.814523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.814585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.814652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.814743 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.917020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.917052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.917063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.917079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:57:59 crc kubenswrapper[4832]: I0125 07:57:59.917093 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:57:59Z","lastTransitionTime":"2026-01-25T07:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.019911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.019938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.019946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.019961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.019971 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.121873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.121899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.121908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.121920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.121928 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.224034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.224060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.224068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.224079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.224102 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.326058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.326100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.326109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.326123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.326132 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.428079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.428565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.428580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.428595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.428606 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.530614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.530646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.530664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.530677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.530716 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.617700 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:18:31.252952504 +0000 UTC Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.633162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.633201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.633211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.633226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.633237 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.668735 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.668849 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:00 crc kubenswrapper[4832]: E0125 07:58:00.668905 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.668952 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:00 crc kubenswrapper[4832]: E0125 07:58:00.669008 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:00 crc kubenswrapper[4832]: E0125 07:58:00.669120 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.736093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.736129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.736139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.736153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.736163 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.838691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.839036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.839132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.839206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.839294 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.942549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.942610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.942621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.942643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:00 crc kubenswrapper[4832]: I0125 07:58:00.942656 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:00Z","lastTransitionTime":"2026-01-25T07:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.045533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.045587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.045601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.045623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.045644 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.147757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.147812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.147830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.147852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.147869 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.250506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.250539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.250549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.250561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.250571 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.353015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.353241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.353312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.353376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.353470 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.455643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.456020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.456100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.456168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.456226 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.558509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.558544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.558555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.558568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.558580 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.618496 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:08:22.026332174 +0000 UTC Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.660328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.660368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.660378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.660408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.660420 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.669568 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:01 crc kubenswrapper[4832]: E0125 07:58:01.669733 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.762766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.762807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.762819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.762834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.762842 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.865289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.865349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.865371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.865432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.865455 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.967821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.967863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.967872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.967885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:01 crc kubenswrapper[4832]: I0125 07:58:01.967894 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:01Z","lastTransitionTime":"2026-01-25T07:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.069463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.069504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.069516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.069532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.069544 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.171685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.171728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.171739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.171755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.171769 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.273586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.273618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.273628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.273650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.273680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.375607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.375678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.375690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.375726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.375741 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.477895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.477932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.477944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.477961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.477973 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.579992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.580040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.580054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.580073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.580084 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.619374 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:13:41.305437345 +0000 UTC Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.669043 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.669105 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.669078 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:02 crc kubenswrapper[4832]: E0125 07:58:02.669272 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:02 crc kubenswrapper[4832]: E0125 07:58:02.669350 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:02 crc kubenswrapper[4832]: E0125 07:58:02.669463 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.682521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.682563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.682577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.682597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.682613 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.737127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:02 crc kubenswrapper[4832]: E0125 07:58:02.737293 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:58:02 crc kubenswrapper[4832]: E0125 07:58:02.737353 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:58:34.737336966 +0000 UTC m=+97.411160499 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.784838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.784878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.784890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.784907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.784919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.887092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.887142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.887153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.887169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.887181 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.988735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.988775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.988787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.988802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:02 crc kubenswrapper[4832]: I0125 07:58:02.988814 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:02Z","lastTransitionTime":"2026-01-25T07:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.091054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.091090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.091100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.091114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.091128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.193418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.193487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.193496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.193509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.193518 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.296009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.296049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.296057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.296071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.296080 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.398212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.398249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.398258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.398272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.398281 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.500268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.500318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.500326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.500345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.500354 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.602480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.602514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.602525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.602541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.602552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.620012 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:14:50.222043851 +0000 UTC Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.668907 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:03 crc kubenswrapper[4832]: E0125 07:58:03.669049 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.704973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.705004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.705012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.705024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.705033 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.807138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.807178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.807195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.807216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.807234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.909362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.909462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.909471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.909486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:03 crc kubenswrapper[4832]: I0125 07:58:03.909496 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:03Z","lastTransitionTime":"2026-01-25T07:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.011231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.011269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.011280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.011296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.011306 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.290796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.290849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.290863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.290882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.290894 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.293132 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/0.log" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.293192 4832 generic.go:334] "Generic (PLEG): container finished" podID="5439ad80-35f6-4da4-8745-8104e9963472" containerID="c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff" exitCode=1 Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.293233 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerDied","Data":"c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.293853 4832 scope.go:117] "RemoveContainer" containerID="c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.319941 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.332795 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.343333 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.351977 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.370679 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 
ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.380597 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.391821 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.392749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.392876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.392957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.393173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.393372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.410741 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.427499 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.443033 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.453533 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.466275 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.478833 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.488168 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.496291 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.496724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.496756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.496790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.496809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.496820 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.505715 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.515293 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.528089 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:04Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.599270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.599297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.599305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.599318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.599342 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.620657 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:19:52.833238092 +0000 UTC Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.669266 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:04 crc kubenswrapper[4832]: E0125 07:58:04.669373 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.669281 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:04 crc kubenswrapper[4832]: E0125 07:58:04.669467 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.669562 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:04 crc kubenswrapper[4832]: E0125 07:58:04.669893 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.670196 4832 scope.go:117] "RemoveContainer" containerID="46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.701449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.701610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.701699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.701813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.701912 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.805495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.806141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.806196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.806226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.806278 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.909215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.909285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.909303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.909339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:04 crc kubenswrapper[4832]: I0125 07:58:04.909360 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:04Z","lastTransitionTime":"2026-01-25T07:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.012113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.012164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.012173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.012192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.012202 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.115190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.115249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.115260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.115283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.115295 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.216891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.216935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.216947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.216962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.216974 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.299252 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/0.log" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.299405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerStarted","Data":"bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.305424 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/2.log" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.309353 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.310179 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.320880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.320926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.320948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.320969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc 
kubenswrapper[4832]: I0125 07:58:05.320982 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.322796 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 
ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.337437 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.359675 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.380715 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.395905 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.406531 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.422353 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc 
kubenswrapper[4832]: I0125 07:58:05.423668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.423708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.423718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.423736 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.423750 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.439094 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e58
5607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.465474 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.481040 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.495250 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf 
to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.505372 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.520117 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.526624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.526716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.526741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.526773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.526795 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.534175 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.548627 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.559871 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642
dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.576406 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.594249 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.608954 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.620990 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:07:38.470687296 +0000 UTC Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.623245 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.629453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.629484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.629493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.629509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.629520 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.637780 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.647357 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.666063 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 
ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.668828 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:05 crc kubenswrapper[4832]: E0125 07:58:05.668964 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.681227 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.692340 4832 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f3
5a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.712061 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.724234 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.731798 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.731860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.731872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.731888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.731900 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.738410 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.748874 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.763537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.779567 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.790623 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.801743 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.812011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.822653 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.834154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.834199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.834211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.834226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.834236 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.836838 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:05Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.937599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.937652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.937665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.937685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:05 crc kubenswrapper[4832]: I0125 07:58:05.937699 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:05Z","lastTransitionTime":"2026-01-25T07:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.043336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.043367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.043376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.043410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.043418 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.146171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.146225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.146238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.146255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.146266 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.248687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.248732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.248744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.248761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.248773 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.313136 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/3.log" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.313732 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/2.log" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.315723 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" exitCode=1 Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.315758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.315813 4832 scope.go:117] "RemoveContainer" containerID="46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.316531 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 07:58:06 crc kubenswrapper[4832]: E0125 07:58:06.316812 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.332010 4832 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745f
f6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.343885 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.351199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.351226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.351236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.351251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.351262 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.356700 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.366559 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.383813 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f7a9d8da7bc60b49c21eb3838eb9b38263ef6bf7be257ababc09c050822355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:57:40Z\\\",\\\"message\\\":\\\" node crc\\\\nI0125 07:57:40.180788 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7tflx after 0 failed attempt(s)\\\\nI0125 07:57:40.180793 6436 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-7tflx\\\\nI0125 07:57:40.180768 6436 
ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-9r9sz in node crc\\\\nI0125 07:57:40.180804 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-9r9sz after 0 failed attempt(s)\\\\nI0125 07:57:40.180809 6436 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-9r9sz\\\\nI0125 07:57:40.180747 6436 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-6dqw2 after 0 failed attempt(s)\\\\nI0125 07:57:40.180817 6436 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-6dqw2\\\\nI0125 07:57:40.180731 6436 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-plv66\\\\nF0125 07:57:40.180824 6436 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"message\\\":\\\"map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0125 07:58:05.422450 6811 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 07:58:05.420969 6811 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\
\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.394504 
4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.407156 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.419237 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.430937 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.442157 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.451230 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.454419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.454449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.454489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.454503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.454512 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.460631 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e58
5607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.478150 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.487837 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.496188 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.504993 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.516244 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.529255 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:06Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.556809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.556869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.556882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.556899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.556911 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.621642 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:03:37.122308289 +0000 UTC Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.659469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.659495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.659505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.659520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.659532 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.669075 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:06 crc kubenswrapper[4832]: E0125 07:58:06.669180 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.669340 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:06 crc kubenswrapper[4832]: E0125 07:58:06.669423 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.669556 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:06 crc kubenswrapper[4832]: E0125 07:58:06.669628 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.762549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.762647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.762674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.762706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.762733 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.865253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.865288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.865297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.865311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.865320 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.968241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.968281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.968289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.968303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:06 crc kubenswrapper[4832]: I0125 07:58:06.968315 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:06Z","lastTransitionTime":"2026-01-25T07:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.070455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.070486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.070498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.070514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.070525 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.172717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.172754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.172762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.172776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.172784 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.224495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.224521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.224529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.224566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.224597 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.235128 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.238305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.238331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.238339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.238371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.238381 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.248272 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.250979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.251029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.251038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.251051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.251062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.266509 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.270510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.270534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.270543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.270556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.270565 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.280274 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.283629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.283674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.283681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.283697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.283708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.293616 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.293736 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.295146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.295194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.295207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.295222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.295234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.320339 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/3.log" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.323708 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.323949 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.335307 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.345598 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.355871 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.363981 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.372798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.382821 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.395271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.396934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.396972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.396982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.396998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.397009 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.407356 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.417027 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.425805 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.433459 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.448521 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"message\\\":\\\"map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0125 07:58:05.422450 6811 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 07:58:05.420969 6811 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.457958 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.467593 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.485888 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.497279 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.498623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.498676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.498689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.498715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.498728 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.507626 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3
fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.516887 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc 
kubenswrapper[4832]: I0125 07:58:07.600994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.601038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.601049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.601064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.601074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.622362 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:57:54.459745847 +0000 UTC Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.669140 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:07 crc kubenswrapper[4832]: E0125 07:58:07.669300 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.681755 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.691916 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.700586 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.702992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.703024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.703033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.703046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.703057 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.717322 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"message\\\":\\\"map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0125 07:58:05.422450 6811 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 07:58:05.420969 6811 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.727580 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.739709 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.756324 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.772061 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.786810 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf 
to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.795460 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.805681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.805709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.805718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.805611 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.805731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc 
kubenswrapper[4832]: I0125 07:58:07.805914 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.818480 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.828640 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.836249 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.844786 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715
e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.853608 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.863795 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.875672 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:07Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.908387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.908429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.908439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.908451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:07 crc kubenswrapper[4832]: I0125 07:58:07.908461 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:07Z","lastTransitionTime":"2026-01-25T07:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.010829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.010927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.010942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.010962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.010977 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.112463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.112506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.112522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.112536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.112547 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.214749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.214798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.214812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.214831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.214842 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.317363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.317436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.317447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.317466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.317479 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.419683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.419723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.419736 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.419752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.419763 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.521587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.521615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.521626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.521639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.521650 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.622664 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:16:24.546439564 +0000 UTC Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.623910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.623951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.623964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.623983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.623995 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.669275 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.669353 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:08 crc kubenswrapper[4832]: E0125 07:58:08.669427 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.669483 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:08 crc kubenswrapper[4832]: E0125 07:58:08.669598 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:08 crc kubenswrapper[4832]: E0125 07:58:08.669787 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.726337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.726378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.726405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.726422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.726434 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.827949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.827991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.828000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.828017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.828028 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.930505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.930543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.930552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.930568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:08 crc kubenswrapper[4832]: I0125 07:58:08.930579 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:08Z","lastTransitionTime":"2026-01-25T07:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.032992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.033035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.033047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.033063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.033073 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.135740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.135789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.135801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.135819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.135832 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.237873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.237903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.237911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.237924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.237934 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.339862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.339893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.339901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.339913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.339922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.444728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.444773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.444806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.444824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.444835 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.547684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.547729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.547739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.547753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.547762 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.623764 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:49:46.995920053 +0000 UTC Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.649914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.649947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.649956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.649969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.649978 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.669291 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:09 crc kubenswrapper[4832]: E0125 07:58:09.669508 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.752584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.752636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.752648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.752665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.752679 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.855237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.855280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.855292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.855307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.855318 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.957883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.957923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.957931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.957946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:09 crc kubenswrapper[4832]: I0125 07:58:09.957959 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:09Z","lastTransitionTime":"2026-01-25T07:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.060849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.060920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.060935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.060953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.060963 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.163750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.163793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.163807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.163823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.163833 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.266509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.266535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.266545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.266562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.266575 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.369899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.369968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.369983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.370000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.370011 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.473043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.473084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.473094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.473112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.473124 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.575104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.575187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.575198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.575215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.575228 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.623918 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:00:25.222505936 +0000 UTC Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.668662 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.668665 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:10 crc kubenswrapper[4832]: E0125 07:58:10.668842 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.668695 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:10 crc kubenswrapper[4832]: E0125 07:58:10.668945 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:10 crc kubenswrapper[4832]: E0125 07:58:10.669001 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.677548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.677595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.677610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.677634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.677666 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.780081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.780127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.780138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.780159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.780170 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.882420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.882459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.882468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.882482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.882490 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.984182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.984232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.984243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.984261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:10 crc kubenswrapper[4832]: I0125 07:58:10.984278 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:10Z","lastTransitionTime":"2026-01-25T07:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.086520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.086563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.086572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.086586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.086598 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.189113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.189159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.189172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.189190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.189202 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.291633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.291677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.291689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.291711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.292057 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.394957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.395006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.395022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.395046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.395062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.497125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.497185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.497198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.497216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.497235 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.600954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.601011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.601026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.601050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.601068 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.624526 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:45:06.86875985 +0000 UTC Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.669810 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:11 crc kubenswrapper[4832]: E0125 07:58:11.670016 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.703012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.703076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.703093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.703117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.703137 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.805707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.805788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.805813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.805845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.805868 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.908789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.908865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.908885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.908911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:11 crc kubenswrapper[4832]: I0125 07:58:11.908926 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:11Z","lastTransitionTime":"2026-01-25T07:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.011929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.011980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.011996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.012020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.012036 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.115096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.115142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.115159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.115181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.115198 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.218280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.218388 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.218431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.218455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.218471 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.321951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.322013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.322030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.322053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.322070 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.424644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.424695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.424718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.424745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.424768 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.527220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.527261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.527270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.527287 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.527296 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.625416 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:13:34.540402292 +0000 UTC Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.629683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.629732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.629743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.629764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.629777 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.669129 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.669276 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:12 crc kubenswrapper[4832]: E0125 07:58:12.669483 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.669523 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:12 crc kubenswrapper[4832]: E0125 07:58:12.669854 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:12 crc kubenswrapper[4832]: E0125 07:58:12.669781 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.732870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.732924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.732942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.732965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.732983 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.836037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.836109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.836131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.836156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.836172 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.939440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.939500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.939513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.939533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:12 crc kubenswrapper[4832]: I0125 07:58:12.939551 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:12Z","lastTransitionTime":"2026-01-25T07:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.042002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.042079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.042103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.042134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.042159 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.144541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.144591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.144601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.144618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.144629 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.247380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.247429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.247437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.247451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.247462 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.349491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.349543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.349553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.349567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.349575 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.452443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.452507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.452533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.452567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.452592 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.555435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.555494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.555513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.555535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.555551 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.626328 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:26:29.041938249 +0000 UTC Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.658326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.658465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.658493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.658528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.658552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.668871 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:13 crc kubenswrapper[4832]: E0125 07:58:13.669040 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.760763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.760835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.760858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.760888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.760914 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.863759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.863818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.863842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.863931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.863962 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.966847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.966929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.966954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.966988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:13 crc kubenswrapper[4832]: I0125 07:58:13.967012 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:13Z","lastTransitionTime":"2026-01-25T07:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.083516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.083578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.083596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.083620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.083638 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.185888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.185968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.186005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.186036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.186059 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.288868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.288947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.288972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.288998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.289017 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.392474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.392515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.392530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.392546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.392559 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.495712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.495773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.495795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.495819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.495836 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.599245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.599306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.599318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.599334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.599345 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.626811 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:46:35.624252819 +0000 UTC Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.668835 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.668884 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.668932 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:14 crc kubenswrapper[4832]: E0125 07:58:14.669577 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:14 crc kubenswrapper[4832]: E0125 07:58:14.669811 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:14 crc kubenswrapper[4832]: E0125 07:58:14.670175 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.705821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.705926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.705947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.705974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.705994 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.809678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.809765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.809783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.809809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.809828 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.914291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.914335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.914346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.914367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:14 crc kubenswrapper[4832]: I0125 07:58:14.914379 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:14Z","lastTransitionTime":"2026-01-25T07:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.016458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.016501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.016512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.016558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.016571 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.119610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.119675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.119693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.119723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.119743 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.222850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.222915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.222933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.222959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.222980 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.325456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.325497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.325510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.325530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.325583 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.428341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.428419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.428432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.428452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.428464 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.532113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.532160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.532169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.532186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.532199 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.627199 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:04:11.813168203 +0000 UTC Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.635282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.635350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.635370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.635443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.635466 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.669442 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:15 crc kubenswrapper[4832]: E0125 07:58:15.669698 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.738011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.738066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.738077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.738099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.738110 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.840953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.841017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.841039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.841069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.841091 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.943982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.944026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.944035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.944049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:15 crc kubenswrapper[4832]: I0125 07:58:15.944061 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:15Z","lastTransitionTime":"2026-01-25T07:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.046835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.046865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.046874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.046885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.046894 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.148809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.148847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.148856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.148871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.148881 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.251124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.251152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.251159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.251171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.251180 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.352756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.352790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.352798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.352809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.352818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.455189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.455234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.455243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.455258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.455295 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.557647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.557688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.557696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.557709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.557718 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.627716 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:00:22.779487999 +0000 UTC Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.660244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.660301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.660318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.660344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.660359 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.669502 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.669554 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:16 crc kubenswrapper[4832]: E0125 07:58:16.669606 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:16 crc kubenswrapper[4832]: E0125 07:58:16.669695 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.669717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:16 crc kubenswrapper[4832]: E0125 07:58:16.669902 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.762837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.762876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.762888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.762904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.762913 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.865071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.865150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.865162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.865180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.865191 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.967418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.968051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.968075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.968095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:16 crc kubenswrapper[4832]: I0125 07:58:16.968103 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:16Z","lastTransitionTime":"2026-01-25T07:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.070984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.071019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.071030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.071054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.071066 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.174208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.174249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.174260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.174278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.174288 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.277030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.277068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.277078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.277093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.277103 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.324469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.324540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.324558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.324584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.324603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: E0125 07:58:17.342455 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.346651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.346691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.346725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.346741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.346752 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: E0125 07:58:17.360312 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.365182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.365243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.365258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.365276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.365287 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: E0125 07:58:17.381023 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.386902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.386974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.386994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.387020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.387040 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: E0125 07:58:17.403536 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 07:58:17.381023 entry above] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.408595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.408642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.408656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.408679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.408694 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: E0125 07:58:17.426478 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: E0125 07:58:17.426605 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.428053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.428083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.428093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.428110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.428123 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.532459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.532505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.532522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.532538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.532550 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.628252 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:37:24.394572262 +0000 UTC Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.634744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.634774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.634781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.634795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.634804 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.669500 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:17 crc kubenswrapper[4832]: E0125 07:58:17.669649 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.682867 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.696539 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.710216 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.721762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.734063 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.736617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.736657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.736669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc 
kubenswrapper[4832]: I0125 07:58:17.736696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.736708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.746061 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.759481 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a22
4c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.773814 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.785271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.797269 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.807775 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.824668 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"message\\\":\\\"map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0125 07:58:05.422450 6811 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 07:58:05.420969 6811 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.835987 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.841687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.841741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.841754 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.841770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.841787 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.850296 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.866843 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.876932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.886712 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf 
to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.895510 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:17Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.943276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.943305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.943312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.943324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:17 crc kubenswrapper[4832]: I0125 07:58:17.943345 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:17Z","lastTransitionTime":"2026-01-25T07:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.045962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.046012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.046023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.046039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.046052 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.147895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.148320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.148340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.148764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.149054 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.251104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.251130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.251147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.251162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.251171 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.353296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.353359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.353372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.353404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.353415 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.456279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.456350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.456363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.456404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.456417 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.559576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.559660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.559678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.559700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.559717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.629438 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:32:35.972773155 +0000 UTC Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.662347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.662411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.662422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.662445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.662459 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.668712 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.668787 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:18 crc kubenswrapper[4832]: E0125 07:58:18.668800 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.668715 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:18 crc kubenswrapper[4832]: E0125 07:58:18.668907 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:18 crc kubenswrapper[4832]: E0125 07:58:18.668990 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.764827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.764875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.764890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.764911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.764925 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.867801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.867847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.867856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.867870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.867880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.970984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.971049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.971060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.971074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:18 crc kubenswrapper[4832]: I0125 07:58:18.971083 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:18Z","lastTransitionTime":"2026-01-25T07:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.073825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.073859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.073868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.073881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.073890 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.175570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.175613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.175624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.175640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.175654 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.278113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.278165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.278178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.278194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.278206 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.380652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.380689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.380698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.380714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.380725 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.483304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.483342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.483366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.483384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.483411 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.586382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.586474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.586492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.586513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.586529 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.630199 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:12:58.173327852 +0000 UTC Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.668975 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:19 crc kubenswrapper[4832]: E0125 07:58:19.669125 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.688477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.688522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.688531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.688548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.688559 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.791015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.791054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.791063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.791081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.791096 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.895493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.895555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.895578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.895608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.895634 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.997581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.997646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.997658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.997674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:19 crc kubenswrapper[4832]: I0125 07:58:19.997684 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:19Z","lastTransitionTime":"2026-01-25T07:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.108913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.108964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.108977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.108992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.109001 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.211053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.211089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.211097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.211112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.211122 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.313534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.313604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.313617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.313639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.313655 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.416025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.416086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.416104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.416126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.416142 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.453741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.453851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.453883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.453918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.453999 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:58:20 crc 
kubenswrapper[4832]: E0125 07:58:20.454005 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:24.453970838 +0000 UTC m=+147.127794421 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.454065 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.454094 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:59:24.454072921 +0000 UTC m=+147.127896454 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.454149 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 07:59:24.454130183 +0000 UTC m=+147.127953716 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.454162 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.454234 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.454254 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.454349 4832 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 07:59:24.454317189 +0000 UTC m=+147.128140792 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.519516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.519569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.519586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.519610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.519628 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.554796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.555015 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.555043 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.555063 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.555138 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 07:59:24.555118182 +0000 UTC m=+147.228941755 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.623969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.624033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.624056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.624081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.624101 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.630347 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:25:45.238456649 +0000 UTC Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.669073 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.669158 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.669199 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.669092 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.669797 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:20 crc kubenswrapper[4832]: E0125 07:58:20.669825 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.726549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.726612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.726629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.726652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.726670 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.829620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.829679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.829691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.829736 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.829748 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.932599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.932675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.932698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.932729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:20 crc kubenswrapper[4832]: I0125 07:58:20.932752 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:20Z","lastTransitionTime":"2026-01-25T07:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.035501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.035558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.035570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.035585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.035596 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.137637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.137671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.137681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.137693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.137704 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.239994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.240028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.240036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.240051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.240063 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.342511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.342545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.342572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.342668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.342679 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.445737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.445778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.445786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.445801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.445810 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.549174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.549216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.549224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.549240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.549249 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.630591 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:41:14.210382222 +0000 UTC Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.651701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.651725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.651734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.651745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.651754 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.669648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:21 crc kubenswrapper[4832]: E0125 07:58:21.669784 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.671150 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 07:58:21 crc kubenswrapper[4832]: E0125 07:58:21.671778 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.680860 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.755464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.755506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.755519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.755534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.755544 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.863442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.863516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.863533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.863557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.863574 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.965501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.965550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.965639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.965664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:21 crc kubenswrapper[4832]: I0125 07:58:21.965677 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:21Z","lastTransitionTime":"2026-01-25T07:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.069529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.069597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.069613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.069636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.069653 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.173169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.173223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.173240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.173262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.173280 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.276517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.276583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.276605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.276633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.276656 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.379473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.379538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.379559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.379587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.379605 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.483140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.483206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.483223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.483246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.483266 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.587190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.587270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.587294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.587329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.587362 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.631115 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:00:56.474188201 +0000 UTC Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.668897 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.668977 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.668994 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:22 crc kubenswrapper[4832]: E0125 07:58:22.669076 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:22 crc kubenswrapper[4832]: E0125 07:58:22.669491 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:22 crc kubenswrapper[4832]: E0125 07:58:22.669553 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.690708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.690756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.690773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.690793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.690809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.793511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.793561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.793571 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.793585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.793595 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.896793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.896857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.896910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.896940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.896961 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.999456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.999508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.999524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.999549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:22 crc kubenswrapper[4832]: I0125 07:58:22.999565 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:22Z","lastTransitionTime":"2026-01-25T07:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.102424 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.102786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.102916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.103070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.103192 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.206640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.206688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.206718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.206732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.206741 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.309555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.309622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.309639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.309664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.309680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.412121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.412354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.412460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.412545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.412618 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.515307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.515352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.515364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.515399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.515412 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.617426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.617711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.617848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.617935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.618035 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.632084 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:49:39.858358327 +0000 UTC Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.669112 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:23 crc kubenswrapper[4832]: E0125 07:58:23.669210 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.721295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.721686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.721891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.722197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.722355 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.824939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.824982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.824994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.825010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.825020 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.927628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.927676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.927689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.927708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:23 crc kubenswrapper[4832]: I0125 07:58:23.927723 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:23Z","lastTransitionTime":"2026-01-25T07:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.030563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.030619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.030630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.030649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.030665 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.133103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.133139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.133148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.133161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.133169 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.235450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.235518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.235540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.235569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.235590 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.338310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.338339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.338347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.338360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.338368 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.440061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.440100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.440109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.440123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.440133 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.542297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.542348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.542357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.542370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.542378 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.632370 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:43:46.575196949 +0000 UTC Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.644692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.644716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.644725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.644738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.644745 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.668827 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.668862 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:24 crc kubenswrapper[4832]: E0125 07:58:24.668925 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.668830 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:24 crc kubenswrapper[4832]: E0125 07:58:24.669000 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:24 crc kubenswrapper[4832]: E0125 07:58:24.669315 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.747309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.747359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.747372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.747426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.747441 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.849305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.849366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.849407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.849430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.849446 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.952209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.952244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.952260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.952275 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:24 crc kubenswrapper[4832]: I0125 07:58:24.952286 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:24Z","lastTransitionTime":"2026-01-25T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.053789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.053823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.053833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.053849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.053859 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.156220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.156290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.156303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.156323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.156337 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.259405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.259446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.259454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.259468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.259477 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.362870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.362941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.362954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.362979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.362992 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.466286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.466357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.466376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.466435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.466456 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.571431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.571489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.571508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.571533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.571552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.633548 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:18:53.529457262 +0000 UTC Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.669955 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:25 crc kubenswrapper[4832]: E0125 07:58:25.670264 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.674646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.674727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.674745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.674771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.674789 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.777712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.777798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.777813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.777837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.777853 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.880223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.880320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.880339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.880370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.880419 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.984249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.984308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.984323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.984343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:25 crc kubenswrapper[4832]: I0125 07:58:25.984353 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:25Z","lastTransitionTime":"2026-01-25T07:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.087536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.087612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.087634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.087661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.087685 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.191453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.191506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.191522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.191546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.191568 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.295380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.295486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.295495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.295510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.295522 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.398159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.398216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.398230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.398252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.398269 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.501465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.501512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.501522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.501541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.501555 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.604353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.604428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.604440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.604458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.604533 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.634336 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:31:16.282367833 +0000 UTC Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.668721 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.668766 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.668783 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:26 crc kubenswrapper[4832]: E0125 07:58:26.668970 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:26 crc kubenswrapper[4832]: E0125 07:58:26.669311 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:26 crc kubenswrapper[4832]: E0125 07:58:26.669448 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.707034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.707081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.707091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.707107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.707118 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.810220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.810307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.810326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.810354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.810373 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.912753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.912799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.912810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.912828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:26 crc kubenswrapper[4832]: I0125 07:58:26.912839 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:26Z","lastTransitionTime":"2026-01-25T07:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.015688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.015746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.015759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.015780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.015793 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.119188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.119233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.119243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.119260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.119271 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.221933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.222011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.222030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.222058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.222078 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.325868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.325955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.325981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.326090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.326120 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.429560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.429627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.429641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.429668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.429686 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.456156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.456217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.456232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.456256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.456278 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: E0125 07:58:27.472149 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.477559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.477650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.477669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.477696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.477716 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: E0125 07:58:27.497492 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.502473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.502527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.502540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.502561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.502575 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: E0125 07:58:27.522293 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.528031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.528076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.528090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.528113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.528127 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: E0125 07:58:27.546685 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.552577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.552629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.552644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.552667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.552685 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: E0125 07:58:27.569492 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0979aa75-019e-429a-886d-abfe16bbe8b2\\\",\\\"systemUUID\\\":\\\"55010a19-6f9d-4b9e-9f82-47bdc3835176\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: E0125 07:58:27.569654 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.571703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.571743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.571760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.571781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.571797 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.634806 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:44:14.987029803 +0000 UTC Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.670017 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:27 crc kubenswrapper[4832]: E0125 07:58:27.670358 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.674508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.674542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.674553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.674569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.674583 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.693545 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f08aec7c666388c5a9a5ccc970acf6e9df3262090951fd1a205cfb2f6cfb26a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e880d54d6b2d147d036dac73afd36230c3a984b018b7bd600dcbd33ca83aa84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.714420 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kzrcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5439ad80-35f6-4da4-8745-8104e9963472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:03Z\\\",\\\"message\\\":\\\"2026-01-25T07:57:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf\\\\n2026-01-25T07:57:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec6ca88f-716a-45cc-bbc3-4dcb86c68fbf to /host/opt/cni/bin/\\\\n2026-01-25T07:57:18Z [verbose] multus-daemon started\\\\n2026-01-25T07:57:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-25T07:58:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dg29p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kzrcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.730246 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1a15135-866b-4644-97aa-34c7da815b6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6wc7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nzj5s\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.747801 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6bad725-5721-4824-a4ed-bfc16b247b44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf625e850d98cfae07cd2c4ef9d3f9a5404baad9c9bf3e87fa7ff5d1ba00212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://902f7ae070f61b744e77e5cbcc7e585607467b588514ae3e99fdded86279a9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1d1028b32f15c85ebc49f8b388004a91d6c08f1bc2c7bf77c2d34db97525111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6
133ad21157032e313120e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79304c289cb94b7a9cd8eed25b9e679ded9f3b2b6133ad21157032e313120e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.773148 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e4b534-077a-47eb-a9aa-463b4dce27c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e400282707469172abd90879bb5c4f96419dd2fbdbc5cc58c6ee9954624b221f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb11acb07674f4808f4563567010790f12a87af272fdcf5ad1998e616c3f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7970bc59b29bb18f7064917431bb4dd3388f593b65f71d697e3bc1c37493d087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae35d18ac48a31c47656c723134740770a44da6fa1587a853402bbfd4f51956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b41ea1d1a7bb58c288bf3c661f5cd441412fc4790cd8361da2061bd35721dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6f28ecd4c0dfb159fffbbdfc1ecbfee0ce21de2efa607937d80ec098bfc2534\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d6c060504d04d04a811fe906985b4981037f7c249befd89d21694b58983826\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f98f07a514287378206a12966a18bcce2ce996434858c036f7e405a8c5d51721\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.776645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.776710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.776726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.776751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.776768 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.790401 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://097b2ff685144140b86c80b5c605d0ef31116b56237a696d1da4bf98f65d7ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.804613 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ljmz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0e6de28-95c1-4b62-93a5-8141ed12ba8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90459cff650e6a278d83d57b502423c3c3bd87cadc083c7642dfc4cc33e7953c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6dzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ljmz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.814280 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fb47e8e-c812-41b4-9be7-3fad81e121b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d30ecfbac91cbd5f546d8f064b715e31917d7db31102376299e2c5fa7951f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c7
0f3fc555cd4308bc5bf2689a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2t6v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9r9sz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.825340 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.835994 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49bab1f91a75d2c164a43ba253102a6ac5ba0fd6347fad172ae2052f055d3434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.849296 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7tflx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947f1c61-f061-4448-b301-9c2554b67933\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62f9942e292890719dd629a44aa806877367db57a332a97f254fea093c039c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://446dcb21c95e4112671db6f4b8376ff3361d3d386ecdaa190f615271511be812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca8e86a16d5f632146a210839dc52fb85013bd79ac5a467847d4a28a672539\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e8c763fc8bcc560d4435f2ed3be793465fb9e31b07bc26b76ce14bf7d9ce6b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a224c00f14700b78550beaa705d0f1cf0b2f13ef8ec3ba81aef885b81292f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0565bbfef6aee4dc36b7eeea5fb9b0d26004395c38af8fb6f1745ff6853957e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c9f3889231e035c1db9611e076f2db7f52cca1449f9cd143323a8599d3141c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6tmq\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7tflx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.858531 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36702e7a-ed10-4b63-ab8f-af1cd3441960\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16cd5f32fafee871295127ddc44b9575056c8d5c29478dd3fb19da6bda07f5fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950d9ef513ef0b8dfe71e41de54a35ffc366d8ec047e5d72819b0dd54a3bf003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d9ef513ef0b8dfe71e41de54a35ffc366d8ec047e5d72819b0dd54a3bf003\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.868183 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.879158 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.880131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.880171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.880183 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.880200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.880212 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.889995 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6dqw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b30a48c-b823-4cdd-ac0c-def5487d8fa6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04c4243f10847106daab854b81ba5b24
466780aa4900922ae2c460468a12e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxmsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6dqw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.904810 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T07:58:05Z\\\",\\\"message\\\":\\\"map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0125 07:58:05.422450 6811 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 07:58:05.420969 6811 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:58:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac96bdf8380dbae226
d8f186a0449b986660f21889eb73734620b26fb796fbf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:57:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkm2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-plv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.915768 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1be4ce34-f46c-4ee9-8fb5-7ac13dafef85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c584b1d69c283cdea5cd50a6f1e3b9a1fd4b4b82bfb1142fb4bb32fd7c7d3fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d0c4fe9bedb92c87bfea3e2e7706dac8825
366b74adb48b257fa32f31a6277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ct7hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.928981 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4399c971-4476-4d24-ae22-8f9710ee1ea8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T07:57:15Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 07:57:10.242088 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 07:57:10.245266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3222874030/tls.crt::/tmp/serving-cert-3222874030/tls.key\\\\\\\"\\\\nI0125 07:57:15.582629 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 07:57:15.585295 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 07:57:15.585315 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 07:57:15.585341 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 07:57:15.585347 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 07:57:15.590465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 07:57:15.590486 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 07:57:15.590502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 07:57:15.590506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 07:57:15.590510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 07:57:15.590513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0125 07:57:15.590670 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0125 07:57:15.594690 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5cdefbe9da3ff798b69ba79465cd9b6f
ce74df31802f14dca3fa58ba5b9d1bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T07:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.940212 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcc553c4-1007-4dbc-8420-60b36d54467a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:57:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T07:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be196a1dec67a58e78aa9de2efa770fc899f210cc9c13962f0ebe78b967ba34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82354c782a5e3edb960aa716e1fc5fa9ab40d1f483ae320f08abfb662c1f1911\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7833d14895ff5c8aa464bdd04ddfe77dd2cddb9658d863bf6421449e62657bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T07:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T07:56:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T07:58:27Z is after 2025-08-24T17:21:41Z" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.984085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.984149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.984169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.984190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:27 crc kubenswrapper[4832]: I0125 07:58:27.984202 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:27Z","lastTransitionTime":"2026-01-25T07:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.087617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.087677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.087696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.087721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.087743 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.190447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.190511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.190526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.190553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.190570 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.295040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.295138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.295159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.295192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.295221 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.399944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.399979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.399989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.400005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.400017 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.502915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.502982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.503000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.503025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.503043 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.606875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.607139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.607157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.607187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.607209 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.635293 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:15:53.848403506 +0000 UTC Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.669503 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:28 crc kubenswrapper[4832]: E0125 07:58:28.669806 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.669892 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.669929 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:28 crc kubenswrapper[4832]: E0125 07:58:28.670704 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:28 crc kubenswrapper[4832]: E0125 07:58:28.670810 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.711625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.711704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.711735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.711765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.711789 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.814821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.814869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.814881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.814900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.814912 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.917818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.917898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.917926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.917954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:28 crc kubenswrapper[4832]: I0125 07:58:28.917977 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:28Z","lastTransitionTime":"2026-01-25T07:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.021159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.021268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.021295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.021332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.021358 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.124653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.124720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.124729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.124744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.124753 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.227644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.227739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.227757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.227781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.227799 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.330927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.331012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.331031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.331058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.331076 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.434937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.434998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.435008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.435023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.435033 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.538851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.538912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.538935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.538970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.538996 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.635827 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:20:34.354081096 +0000 UTC Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.642733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.642812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.642837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.642870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.642892 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.669321 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:29 crc kubenswrapper[4832]: E0125 07:58:29.669610 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.747164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.747214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.747225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.747243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.747258 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.851005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.851040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.851049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.851066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.851077 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.954932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.955047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.955061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.955086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:29 crc kubenswrapper[4832]: I0125 07:58:29.955102 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:29Z","lastTransitionTime":"2026-01-25T07:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.058662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.058722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.058735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.058760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.058778 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.161760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.161813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.161826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.161856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.161871 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.264932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.265015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.265039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.265072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.265098 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.367362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.367431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.367444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.367465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.367484 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.471163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.471243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.471265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.471296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.471318 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.574622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.574685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.574700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.574724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.574740 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.636305 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:35:40.7917474 +0000 UTC Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.668981 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.668981 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.669143 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:30 crc kubenswrapper[4832]: E0125 07:58:30.669272 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:30 crc kubenswrapper[4832]: E0125 07:58:30.669993 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:30 crc kubenswrapper[4832]: E0125 07:58:30.670073 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.678658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.678737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.678753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.678771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.678782 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.781894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.782004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.782025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.782059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.782083 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.885563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.885615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.885625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.885648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.885662 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.988740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.988794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.988804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.988823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:30 crc kubenswrapper[4832]: I0125 07:58:30.988836 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:30Z","lastTransitionTime":"2026-01-25T07:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.091649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.091709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.091719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.091740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.091754 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.194378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.194467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.194484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.194510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.194525 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.299972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.300061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.300083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.300112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.300133 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.402914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.402974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.402996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.403029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.403051 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.506158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.506231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.506240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.506256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.506267 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.609895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.609988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.610021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.610050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.610071 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.636456 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:59:42.572136455 +0000 UTC Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.669157 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:31 crc kubenswrapper[4832]: E0125 07:58:31.669375 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.715718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.715776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.715788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.715806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.715818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.819790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.819848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.819861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.819883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.819895 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.923921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.923996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.924021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.924055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:31 crc kubenswrapper[4832]: I0125 07:58:31.924081 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:31Z","lastTransitionTime":"2026-01-25T07:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.027259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.027310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.027322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.027342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.027354 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.131162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.131231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.131245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.131269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.131285 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.233917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.233996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.234010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.234027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.234040 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.336700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.336767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.336777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.336794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.336803 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.439349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.439475 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.439487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.439510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.439524 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.542246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.542306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.542319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.542335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.542349 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.637317 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:51:56.305567174 +0000 UTC Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.645630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.645699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.645717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.645742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.645758 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.668717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.668764 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.668782 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:32 crc kubenswrapper[4832]: E0125 07:58:32.668866 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:32 crc kubenswrapper[4832]: E0125 07:58:32.669046 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:32 crc kubenswrapper[4832]: E0125 07:58:32.669099 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.748580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.748633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.748644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.748659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.748671 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.851032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.851063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.851072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.851083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.851091 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.952983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.953257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.953275 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.953293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:32 crc kubenswrapper[4832]: I0125 07:58:32.953303 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:32Z","lastTransitionTime":"2026-01-25T07:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.055322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.055424 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.055442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.055466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.055483 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.158261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.158318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.158334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.158355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.158372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.261565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.261643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.261696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.261731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.261755 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.364372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.364453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.364469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.364491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.364509 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.466296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.466329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.466338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.466351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.466360 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.568768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.568805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.568816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.568830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.568838 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.637719 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:57:59.550594344 +0000 UTC Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.669400 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:33 crc kubenswrapper[4832]: E0125 07:58:33.669532 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.671086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.671121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.671136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.671156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.671170 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.773822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.773855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.773870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.773887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.773898 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.876761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.876812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.876823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.876839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.876850 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.978928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.978962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.978988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.979002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:33 crc kubenswrapper[4832]: I0125 07:58:33.979011 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:33Z","lastTransitionTime":"2026-01-25T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.080876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.081375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.081437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.081459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.081474 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.183073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.183136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.183162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.183183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.183199 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.285240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.285317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.285347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.285371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.285433 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.387304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.387494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.387512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.387534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.387550 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.489890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.489954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.489965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.489981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.489990 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.592568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.592629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.592644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.592667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.592685 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.638542 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:21:15.846266452 +0000 UTC Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.669069 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.669133 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.669212 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:34 crc kubenswrapper[4832]: E0125 07:58:34.669248 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:34 crc kubenswrapper[4832]: E0125 07:58:34.669500 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:34 crc kubenswrapper[4832]: E0125 07:58:34.669671 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.694934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.694968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.694979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.694995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.695006 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.797131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.797183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.797417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.797441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.797460 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.811198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:34 crc kubenswrapper[4832]: E0125 07:58:34.811355 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:58:34 crc kubenswrapper[4832]: E0125 07:58:34.811454 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs podName:b1a15135-866b-4644-97aa-34c7da815b6b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:38.811433991 +0000 UTC m=+161.485257534 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs") pod "network-metrics-daemon-nzj5s" (UID: "b1a15135-866b-4644-97aa-34c7da815b6b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.899787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.899832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.899842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.899860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:34 crc kubenswrapper[4832]: I0125 07:58:34.899875 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:34Z","lastTransitionTime":"2026-01-25T07:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.002023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.002078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.002090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.002106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.002120 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.104909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.105197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.105292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.105408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.105502 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.210102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.210152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.210188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.210204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.210214 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.312257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.312760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.312929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.313084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.313239 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.415594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.415645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.415655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.415670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.415681 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.518424 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.518455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.518463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.518476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.518485 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.620963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.621030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.621054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.621082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.621102 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.639242 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:37:10.053890135 +0000 UTC Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.669015 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:35 crc kubenswrapper[4832]: E0125 07:58:35.669210 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.723625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.723673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.723684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.723701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.723714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.826738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.826805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.826828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.826858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.826901 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.929581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.929633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.929641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.929655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:35 crc kubenswrapper[4832]: I0125 07:58:35.929667 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:35Z","lastTransitionTime":"2026-01-25T07:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.032542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.032588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.032599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.032614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.032625 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.135001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.135048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.135059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.135074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.135086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.237698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.237952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.238028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.238102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.238175 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.341128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.341180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.341193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.341211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.341228 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.443164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.443765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.443863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.443930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.443990 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.546345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.546402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.546414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.546429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.546439 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.640364 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:34:14.9632626 +0000 UTC Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.649152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.649544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.649754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.649991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.650223 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.668623 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.668662 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:36 crc kubenswrapper[4832]: E0125 07:58:36.668762 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.668776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:36 crc kubenswrapper[4832]: E0125 07:58:36.668910 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:36 crc kubenswrapper[4832]: E0125 07:58:36.669143 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.670781 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 07:58:36 crc kubenswrapper[4832]: E0125 07:58:36.671192 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-plv66_openshift-ovn-kubernetes(9c6fdc72-86dc-433d-8aac-57b0eeefaca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.753694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.753744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.753761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.753809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.753826 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.857361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.857623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.857718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.857831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.857896 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.959740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.959978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.960099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.960190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:36 crc kubenswrapper[4832]: I0125 07:58:36.960254 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:36Z","lastTransitionTime":"2026-01-25T07:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.062937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.062972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.062982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.062995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.063005 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.164669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.164723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.164737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.164757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.164769 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.266593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.266649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.266664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.266681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.266693 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.369520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.369563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.369573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.369592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.369603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.472176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.472214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.472224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.472237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.472248 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.573919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.573984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.574009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.574040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.574063 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.640978 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:06:17.726890961 +0000 UTC Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.665778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.665835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.665859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.665887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.665909 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.668598 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:37 crc kubenswrapper[4832]: E0125 07:58:37.668739 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.695978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.696033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.696049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.696071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.696088 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T07:58:37Z","lastTransitionTime":"2026-01-25T07:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.712980 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.712947233 podStartE2EDuration="1m22.712947233s" podCreationTimestamp="2026-01-25 07:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.697217649 +0000 UTC m=+100.371041212" watchObservedRunningTime="2026-01-25 07:58:37.712947233 +0000 UTC m=+100.386770806" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.713441 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.713426169 podStartE2EDuration="1m22.713426169s" podCreationTimestamp="2026-01-25 07:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.713142499 +0000 UTC m=+100.386966112" watchObservedRunningTime="2026-01-25 07:58:37.713426169 +0000 UTC m=+100.387249762" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.734120 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5"] Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.734522 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.737363 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.737631 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.740101 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.740979 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.770626 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6dqw2" podStartSLOduration=81.770608481 podStartE2EDuration="1m21.770608481s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.738855325 +0000 UTC m=+100.412678858" watchObservedRunningTime="2026-01-25 07:58:37.770608481 +0000 UTC m=+100.444432014" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.782526 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ct7hc" podStartSLOduration=81.782503467 podStartE2EDuration="1m21.782503467s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.7823035 +0000 UTC m=+100.456127033" watchObservedRunningTime="2026-01-25 07:58:37.782503467 +0000 UTC 
m=+100.456327010" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.819578 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.819557989 podStartE2EDuration="52.819557989s" podCreationTimestamp="2026-01-25 07:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.795464897 +0000 UTC m=+100.469288430" watchObservedRunningTime="2026-01-25 07:58:37.819557989 +0000 UTC m=+100.493381522" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.819732 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=81.819725815 podStartE2EDuration="1m21.819725815s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.81899621 +0000 UTC m=+100.492819733" watchObservedRunningTime="2026-01-25 07:58:37.819725815 +0000 UTC m=+100.493549358" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.841106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4dd343e-750b-4b8d-8d1e-f190c6618743-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.841170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4dd343e-750b-4b8d-8d1e-f190c6618743-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.841194 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4dd343e-750b-4b8d-8d1e-f190c6618743-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.841218 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4dd343e-750b-4b8d-8d1e-f190c6618743-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.841238 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4dd343e-750b-4b8d-8d1e-f190c6618743-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.851565 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kzrcf" podStartSLOduration=81.851550733 podStartE2EDuration="1m21.851550733s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.850964464 +0000 UTC m=+100.524788017" watchObservedRunningTime="2026-01-25 07:58:37.851550733 +0000 UTC m=+100.525374266" Jan 25 
07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.927420 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ljmz9" podStartSLOduration=81.927402216 podStartE2EDuration="1m21.927402216s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.927267972 +0000 UTC m=+100.601091525" watchObservedRunningTime="2026-01-25 07:58:37.927402216 +0000 UTC m=+100.601225749" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.941797 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podStartSLOduration=81.941774635 podStartE2EDuration="1m21.941774635s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.94133481 +0000 UTC m=+100.615158343" watchObservedRunningTime="2026-01-25 07:58:37.941774635 +0000 UTC m=+100.615598168" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.941942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4dd343e-750b-4b8d-8d1e-f190c6618743-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.942002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4dd343e-750b-4b8d-8d1e-f190c6618743-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.942029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4dd343e-750b-4b8d-8d1e-f190c6618743-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.942052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4dd343e-750b-4b8d-8d1e-f190c6618743-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.942063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4dd343e-750b-4b8d-8d1e-f190c6618743-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.942073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4dd343e-750b-4b8d-8d1e-f190c6618743-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.942151 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/c4dd343e-750b-4b8d-8d1e-f190c6618743-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.943116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4dd343e-750b-4b8d-8d1e-f190c6618743-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.956957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4dd343e-750b-4b8d-8d1e-f190c6618743-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: \"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.960171 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.960153117 podStartE2EDuration="16.960153117s" podCreationTimestamp="2026-01-25 07:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.959894218 +0000 UTC m=+100.633717761" watchObservedRunningTime="2026-01-25 07:58:37.960153117 +0000 UTC m=+100.633976650" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.965075 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4dd343e-750b-4b8d-8d1e-f190c6618743-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rtrl5\" (UID: 
\"c4dd343e-750b-4b8d-8d1e-f190c6618743\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:37 crc kubenswrapper[4832]: I0125 07:58:37.989048 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7tflx" podStartSLOduration=81.989027187 podStartE2EDuration="1m21.989027187s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:37.988415126 +0000 UTC m=+100.662238649" watchObservedRunningTime="2026-01-25 07:58:37.989027187 +0000 UTC m=+100.662850740" Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.049016 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.427124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" event={"ID":"c4dd343e-750b-4b8d-8d1e-f190c6618743","Type":"ContainerStarted","Data":"9403f3526b6dbc0bbd60ea0ce35c2fc39e871e1fc01b0a6aac529a4f6870148f"} Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.427180 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" event={"ID":"c4dd343e-750b-4b8d-8d1e-f190c6618743","Type":"ContainerStarted","Data":"8e04e115d386292d8072c1b8e410ee86a0a635380515fe6d00ea4a85bb899307"} Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.440480 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rtrl5" podStartSLOduration=82.440464834 podStartE2EDuration="1m22.440464834s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:38.439641497 +0000 UTC m=+101.113465060" watchObservedRunningTime="2026-01-25 07:58:38.440464834 +0000 UTC m=+101.114288357" Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.642111 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:40:20.215105186 +0000 UTC Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.642496 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.649276 4832 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.669330 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.669457 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:38 crc kubenswrapper[4832]: E0125 07:58:38.669532 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:38 crc kubenswrapper[4832]: I0125 07:58:38.669581 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:38 crc kubenswrapper[4832]: E0125 07:58:38.669781 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:38 crc kubenswrapper[4832]: E0125 07:58:38.669912 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:39 crc kubenswrapper[4832]: I0125 07:58:39.668754 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:39 crc kubenswrapper[4832]: E0125 07:58:39.668854 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:40 crc kubenswrapper[4832]: I0125 07:58:40.669423 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:40 crc kubenswrapper[4832]: I0125 07:58:40.669456 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:40 crc kubenswrapper[4832]: I0125 07:58:40.669482 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:40 crc kubenswrapper[4832]: E0125 07:58:40.669584 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:40 crc kubenswrapper[4832]: E0125 07:58:40.669674 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:40 crc kubenswrapper[4832]: E0125 07:58:40.669771 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:41 crc kubenswrapper[4832]: I0125 07:58:41.669590 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:41 crc kubenswrapper[4832]: E0125 07:58:41.669705 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:42 crc kubenswrapper[4832]: I0125 07:58:42.669113 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:42 crc kubenswrapper[4832]: I0125 07:58:42.669114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:42 crc kubenswrapper[4832]: E0125 07:58:42.669301 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:42 crc kubenswrapper[4832]: E0125 07:58:42.669433 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:42 crc kubenswrapper[4832]: I0125 07:58:42.669160 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:42 crc kubenswrapper[4832]: E0125 07:58:42.669512 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:43 crc kubenswrapper[4832]: I0125 07:58:43.669515 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:43 crc kubenswrapper[4832]: E0125 07:58:43.669672 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:44 crc kubenswrapper[4832]: I0125 07:58:44.669921 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:44 crc kubenswrapper[4832]: I0125 07:58:44.669941 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:44 crc kubenswrapper[4832]: E0125 07:58:44.670144 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:44 crc kubenswrapper[4832]: I0125 07:58:44.669942 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:44 crc kubenswrapper[4832]: E0125 07:58:44.670236 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:44 crc kubenswrapper[4832]: E0125 07:58:44.670298 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:45 crc kubenswrapper[4832]: I0125 07:58:45.669174 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:45 crc kubenswrapper[4832]: E0125 07:58:45.669355 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:46 crc kubenswrapper[4832]: I0125 07:58:46.668983 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:46 crc kubenswrapper[4832]: I0125 07:58:46.669093 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:46 crc kubenswrapper[4832]: E0125 07:58:46.669137 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:46 crc kubenswrapper[4832]: I0125 07:58:46.669110 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:46 crc kubenswrapper[4832]: E0125 07:58:46.669277 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:46 crc kubenswrapper[4832]: E0125 07:58:46.669451 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:47 crc kubenswrapper[4832]: I0125 07:58:47.668992 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:47 crc kubenswrapper[4832]: E0125 07:58:47.671745 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:48 crc kubenswrapper[4832]: I0125 07:58:48.669573 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:48 crc kubenswrapper[4832]: E0125 07:58:48.670418 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:48 crc kubenswrapper[4832]: I0125 07:58:48.669685 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:48 crc kubenswrapper[4832]: I0125 07:58:48.669590 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:48 crc kubenswrapper[4832]: I0125 07:58:48.670523 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 07:58:48 crc kubenswrapper[4832]: E0125 07:58:48.670787 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:48 crc kubenswrapper[4832]: E0125 07:58:48.670642 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:49 crc kubenswrapper[4832]: I0125 07:58:49.459228 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/3.log" Jan 25 07:58:49 crc kubenswrapper[4832]: I0125 07:58:49.462467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerStarted","Data":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} Jan 25 07:58:49 crc kubenswrapper[4832]: I0125 07:58:49.462877 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:58:49 crc kubenswrapper[4832]: I0125 07:58:49.475546 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nzj5s"] Jan 25 07:58:49 crc kubenswrapper[4832]: I0125 07:58:49.475646 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:49 crc kubenswrapper[4832]: E0125 07:58:49.475728 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:49 crc kubenswrapper[4832]: I0125 07:58:49.513065 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podStartSLOduration=93.513047363 podStartE2EDuration="1m33.513047363s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:58:49.513039033 +0000 UTC m=+112.186862556" watchObservedRunningTime="2026-01-25 07:58:49.513047363 +0000 UTC m=+112.186870886" Jan 25 07:58:49 crc kubenswrapper[4832]: I0125 07:58:49.668960 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:49 crc kubenswrapper[4832]: E0125 07:58:49.669264 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.466320 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/1.log" Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.466790 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/0.log" Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.466830 4832 generic.go:334] "Generic (PLEG): container finished" podID="5439ad80-35f6-4da4-8745-8104e9963472" containerID="bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e" exitCode=1 Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.466982 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerDied","Data":"bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e"} Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.467080 4832 scope.go:117] "RemoveContainer" containerID="c1f3fab8a8806d76e6199970ac471a73665e6ec874f959a1e7908df814babfff" Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.467569 4832 scope.go:117] "RemoveContainer" containerID="bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e" Jan 25 07:58:50 crc kubenswrapper[4832]: E0125 07:58:50.467800 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kzrcf_openshift-multus(5439ad80-35f6-4da4-8745-8104e9963472)\"" pod="openshift-multus/multus-kzrcf" podUID="5439ad80-35f6-4da4-8745-8104e9963472" Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.668801 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.668837 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:50 crc kubenswrapper[4832]: I0125 07:58:50.668827 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:50 crc kubenswrapper[4832]: E0125 07:58:50.668958 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:50 crc kubenswrapper[4832]: E0125 07:58:50.669099 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:50 crc kubenswrapper[4832]: E0125 07:58:50.669167 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:51 crc kubenswrapper[4832]: I0125 07:58:51.473292 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/1.log" Jan 25 07:58:51 crc kubenswrapper[4832]: I0125 07:58:51.670969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:51 crc kubenswrapper[4832]: E0125 07:58:51.671129 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:52 crc kubenswrapper[4832]: I0125 07:58:52.668658 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:52 crc kubenswrapper[4832]: I0125 07:58:52.668756 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:52 crc kubenswrapper[4832]: E0125 07:58:52.668822 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:52 crc kubenswrapper[4832]: I0125 07:58:52.668701 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:52 crc kubenswrapper[4832]: E0125 07:58:52.668926 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:52 crc kubenswrapper[4832]: E0125 07:58:52.669017 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:53 crc kubenswrapper[4832]: I0125 07:58:53.668984 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:53 crc kubenswrapper[4832]: E0125 07:58:53.669176 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:54 crc kubenswrapper[4832]: I0125 07:58:54.668703 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:54 crc kubenswrapper[4832]: E0125 07:58:54.668839 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:54 crc kubenswrapper[4832]: I0125 07:58:54.669311 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:54 crc kubenswrapper[4832]: E0125 07:58:54.669427 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:54 crc kubenswrapper[4832]: I0125 07:58:54.669835 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:54 crc kubenswrapper[4832]: E0125 07:58:54.670086 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:55 crc kubenswrapper[4832]: I0125 07:58:55.669508 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:55 crc kubenswrapper[4832]: E0125 07:58:55.669644 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:56 crc kubenswrapper[4832]: I0125 07:58:56.669612 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:56 crc kubenswrapper[4832]: I0125 07:58:56.669648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:56 crc kubenswrapper[4832]: I0125 07:58:56.669659 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:56 crc kubenswrapper[4832]: E0125 07:58:56.669745 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:56 crc kubenswrapper[4832]: E0125 07:58:56.669917 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:56 crc kubenswrapper[4832]: E0125 07:58:56.669997 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:57 crc kubenswrapper[4832]: E0125 07:58:57.619443 4832 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 25 07:58:57 crc kubenswrapper[4832]: I0125 07:58:57.668921 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:57 crc kubenswrapper[4832]: E0125 07:58:57.670017 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:58:57 crc kubenswrapper[4832]: E0125 07:58:57.771124 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 25 07:58:58 crc kubenswrapper[4832]: I0125 07:58:58.669117 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:58:58 crc kubenswrapper[4832]: I0125 07:58:58.669188 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:58:58 crc kubenswrapper[4832]: E0125 07:58:58.669320 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:58:58 crc kubenswrapper[4832]: E0125 07:58:58.669428 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:58:58 crc kubenswrapper[4832]: I0125 07:58:58.669117 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:58:58 crc kubenswrapper[4832]: E0125 07:58:58.669523 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:58:59 crc kubenswrapper[4832]: I0125 07:58:59.669312 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:58:59 crc kubenswrapper[4832]: E0125 07:58:59.669592 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:59:00 crc kubenswrapper[4832]: I0125 07:59:00.669501 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:00 crc kubenswrapper[4832]: I0125 07:59:00.669567 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:59:00 crc kubenswrapper[4832]: I0125 07:59:00.669596 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:00 crc kubenswrapper[4832]: E0125 07:59:00.669672 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:59:00 crc kubenswrapper[4832]: E0125 07:59:00.669717 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:59:00 crc kubenswrapper[4832]: E0125 07:59:00.669751 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:59:01 crc kubenswrapper[4832]: I0125 07:59:01.670658 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:01 crc kubenswrapper[4832]: I0125 07:59:01.670677 4832 scope.go:117] "RemoveContainer" containerID="bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e" Jan 25 07:59:01 crc kubenswrapper[4832]: E0125 07:59:01.670808 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:59:02 crc kubenswrapper[4832]: I0125 07:59:02.507476 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/1.log" Jan 25 07:59:02 crc kubenswrapper[4832]: I0125 07:59:02.507540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerStarted","Data":"ed577a9d1a5da395208b09f520d83f7012e027930420e43192c4061c5e804650"} Jan 25 07:59:02 crc kubenswrapper[4832]: I0125 07:59:02.669353 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:02 crc kubenswrapper[4832]: I0125 07:59:02.669423 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:02 crc kubenswrapper[4832]: E0125 07:59:02.669536 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:59:02 crc kubenswrapper[4832]: E0125 07:59:02.669739 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:59:02 crc kubenswrapper[4832]: I0125 07:59:02.669881 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:59:02 crc kubenswrapper[4832]: E0125 07:59:02.670046 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:59:02 crc kubenswrapper[4832]: E0125 07:59:02.772915 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 25 07:59:03 crc kubenswrapper[4832]: I0125 07:59:03.669309 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:03 crc kubenswrapper[4832]: E0125 07:59:03.669476 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:59:04 crc kubenswrapper[4832]: I0125 07:59:04.668838 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:59:04 crc kubenswrapper[4832]: E0125 07:59:04.668966 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:59:04 crc kubenswrapper[4832]: I0125 07:59:04.669193 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:04 crc kubenswrapper[4832]: E0125 07:59:04.669249 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:59:04 crc kubenswrapper[4832]: I0125 07:59:04.669523 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:04 crc kubenswrapper[4832]: E0125 07:59:04.669709 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:59:05 crc kubenswrapper[4832]: I0125 07:59:05.668884 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:05 crc kubenswrapper[4832]: E0125 07:59:05.669038 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:59:06 crc kubenswrapper[4832]: I0125 07:59:06.668760 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:06 crc kubenswrapper[4832]: I0125 07:59:06.668807 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:06 crc kubenswrapper[4832]: I0125 07:59:06.668834 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:59:06 crc kubenswrapper[4832]: E0125 07:59:06.668904 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 07:59:06 crc kubenswrapper[4832]: E0125 07:59:06.669027 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nzj5s" podUID="b1a15135-866b-4644-97aa-34c7da815b6b" Jan 25 07:59:06 crc kubenswrapper[4832]: E0125 07:59:06.669130 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 07:59:07 crc kubenswrapper[4832]: I0125 07:59:07.668579 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:07 crc kubenswrapper[4832]: E0125 07:59:07.670070 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.226048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.260544 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.261052 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.272205 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.275172 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.275498 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.275667 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.275846 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.275942 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.276067 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.276561 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.276726 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.276819 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-99kns"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.277241 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-29fbk"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.277501 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.277752 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.277791 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.287911 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.290173 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.297275 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q5r28"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.302800 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sqbmg"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.303103 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8pg27"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.303374 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.304245 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 
07:59:08.304627 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xw4z9"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.305001 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.305441 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.305982 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqjzs"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.306307 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.306418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.306512 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.306535 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.306674 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.306931 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.306941 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 25 
07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307043 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307138 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307242 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307337 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307445 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307574 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307621 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307780 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307802 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.307963 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.308057 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.308127 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.308349 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.308030 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.309038 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.309200 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.309472 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.309602 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.309785 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.309846 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.310348 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.310664 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.310946 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.313560 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.314040 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fswfm"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.314417 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.314690 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.315113 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.315511 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.330202 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.330632 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.331050 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fth6d"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.331409 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.331872 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gp55m"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.332029 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.332238 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.342629 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.354486 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.376995 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.387527 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.387967 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.392211 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6llzt"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.393612 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.393657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-encryption-config\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.393693 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.393715 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzhk7\" (UniqueName: 
\"kubernetes.io/projected/cc912b0f-bde8-4185-be84-2a2c3394024f-kube-api-access-mzhk7\") pod \"dns-operator-744455d44c-fth6d\" (UID: \"cc912b0f-bde8-4185-be84-2a2c3394024f\") " pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.393736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvl7k\" (UniqueName: \"kubernetes.io/projected/39120fe3-c252-4345-80bc-048cde22bafe-kube-api-access-wvl7k\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.393750 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.393752 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-serving-cert\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.394170 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.396511 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.397743 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.398701 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.399212 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.399282 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.397778 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.400470 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.399696 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.399843 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.400652 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.399872 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.399927 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.400014 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.400047 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.400316 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.402467 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.403890 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404027 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404131 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404200 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 25 07:59:08 crc kubenswrapper[4832]: 
I0125 07:59:08.404268 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404315 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404361 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404470 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404544 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404618 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404687 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404760 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404829 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404899 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.404968 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405035 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405106 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405178 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405320 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405431 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405499 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405567 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405646 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405740 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.405808 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.407318 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.407413 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.407478 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.408518 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.408717 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.408760 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.408795 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.408850 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409265 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409400 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409477 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409569 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409633 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409724 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409836 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.409918 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.410214 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.412460 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f222l"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413007 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-config\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413054 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-etcd-client\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d506c861-ab5e-4341-8e16-ce9166f24d5c-audit-dir\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4fd4e7-2916-47d8-8d38-012c53e792fc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413136 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d48c21e4-2d38-4055-a586-93b65a3ff446-srv-cert\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413173 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d506c861-ab5e-4341-8e16-ce9166f24d5c-node-pullsecrets\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39120fe3-c252-4345-80bc-048cde22bafe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-image-import-ca\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9x6j\" (UniqueName: \"kubernetes.io/projected/c97f51ea-b215-4660-bc7b-2406783aa3bb-kube-api-access-m9x6j\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413283 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqq64\" (UniqueName: \"kubernetes.io/projected/d506c861-ab5e-4341-8e16-ce9166f24d5c-kube-api-access-zqq64\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d48c21e4-2d38-4055-a586-93b65a3ff446-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k8k5\" (UniqueName: \"kubernetes.io/projected/d48c21e4-2d38-4055-a586-93b65a3ff446-kube-api-access-8k8k5\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413394 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc912b0f-bde8-4185-be84-2a2c3394024f-metrics-tls\") pod \"dns-operator-744455d44c-fth6d\" (UID: \"cc912b0f-bde8-4185-be84-2a2c3394024f\") " pod="openshift-dns-operator/dns-operator-744455d44c-fth6d"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413421 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39120fe3-c252-4345-80bc-048cde22bafe-serving-cert\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413434 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4fd4e7-2916-47d8-8d38-012c53e792fc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e4fd4e7-2916-47d8-8d38-012c53e792fc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413716 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-f222l"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413752 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.413795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-audit\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.414105 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.414158 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.414240 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.414417 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.414817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.415187 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.420354 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.430247 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xjkrg"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.430869 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xjkrg"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.431209 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.431534 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.447090 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.466045 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.467374 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.467659 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.467840 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.468117 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.468515 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.470112 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.470736 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.471092 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.471807 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.471805 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.472133 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.472589 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.472910 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.474079 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.474951 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.475329 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.475948 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.475990 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.476747 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.477821 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.479478 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cdncb"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.479998 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.481026 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jvld2"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.481761 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvld2"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.482569 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kpg7m"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.482722 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.483735 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.484124 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-99kns"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.486160 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.486851 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-29fbk"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.488621 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.490879 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.490972 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.491451 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.491973 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8pg27"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.493466 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqjzs"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.495018 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.496506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.498155 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-752ng"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.498842 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-752ng"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.500112 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-88fz6"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.501783 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-88fz6"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.503032 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.504254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q5r28"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.506242 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.507612 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fswfm"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.509251 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.511074 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fth6d"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.512827 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514074 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp"]
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e4fd4e7-2916-47d8-8d38-012c53e792fc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514470 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-audit\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514521 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzhk7\" (UniqueName: \"kubernetes.io/projected/cc912b0f-bde8-4185-be84-2a2c3394024f-kube-api-access-mzhk7\") pod \"dns-operator-744455d44c-fth6d\" (UID: \"cc912b0f-bde8-4185-be84-2a2c3394024f\") " pod="openshift-dns-operator/dns-operator-744455d44c-fth6d"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514546 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-encryption-config\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514567 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvl7k\" (UniqueName: \"kubernetes.io/projected/39120fe3-c252-4345-80bc-048cde22bafe-kube-api-access-wvl7k\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-serving-cert\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d506c861-ab5e-4341-8e16-ce9166f24d5c-audit-dir\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514638 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-config\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-etcd-client\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514683 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d48c21e4-2d38-4055-a586-93b65a3ff446-srv-cert\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4fd4e7-2916-47d8-8d38-012c53e792fc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514714 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514739 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d506c861-ab5e-4341-8e16-ce9166f24d5c-node-pullsecrets\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39120fe3-c252-4345-80bc-048cde22bafe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-image-import-ca\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514808 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9x6j\" (UniqueName: \"kubernetes.io/projected/c97f51ea-b215-4660-bc7b-2406783aa3bb-kube-api-access-m9x6j\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514848 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqq64\" (UniqueName: \"kubernetes.io/projected/d506c861-ab5e-4341-8e16-ce9166f24d5c-kube-api-access-zqq64\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d48c21e4-2d38-4055-a586-93b65a3ff446-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k8k5\" (UniqueName: \"kubernetes.io/projected/d48c21e4-2d38-4055-a586-93b65a3ff446-kube-api-access-8k8k5\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514916 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc912b0f-bde8-4185-be84-2a2c3394024f-metrics-tls\") pod \"dns-operator-744455d44c-fth6d\" (UID: \"cc912b0f-bde8-4185-be84-2a2c3394024f\") " pod="openshift-dns-operator/dns-operator-744455d44c-fth6d"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39120fe3-c252-4345-80bc-048cde22bafe-serving-cert\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.514946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4fd4e7-2916-47d8-8d38-012c53e792fc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.515247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-audit\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.515673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4fd4e7-2916-47d8-8d38-012c53e792fc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.515705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.516255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d506c861-ab5e-4341-8e16-ce9166f24d5c-audit-dir\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns"
Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.516722 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-config\") pod
\"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.516771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gp55m"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.516828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d506c861-ab5e-4341-8e16-ce9166f24d5c-node-pullsecrets\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.519177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.520829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-encryption-config\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.520862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-etcd-client\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.520946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4fd4e7-2916-47d8-8d38-012c53e792fc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.521113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d48c21e4-2d38-4055-a586-93b65a3ff446-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.521585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39120fe3-c252-4345-80bc-048cde22bafe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.521680 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.522316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d506c861-ab5e-4341-8e16-ce9166f24d5c-image-import-ca\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: 
I0125 07:59:08.522223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.523812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d48c21e4-2d38-4055-a586-93b65a3ff446-srv-cert\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.524975 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.525656 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39120fe3-c252-4345-80bc-048cde22bafe-serving-cert\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: \"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.529848 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d506c861-ab5e-4341-8e16-ce9166f24d5c-serving-cert\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.529926 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.532297 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xw4z9"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.533233 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.534505 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.535545 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.536690 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-88fz6"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.537971 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.539019 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvld2"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.540118 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sqbmg"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.541206 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.542424 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5bk7m"] Jan 25 07:59:08 crc 
kubenswrapper[4832]: I0125 07:59:08.542853 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.543160 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.543686 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.545599 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jjs2r"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.546689 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f222l"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.546803 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.547746 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.549189 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6llzt"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.550506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cdncb"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.551691 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.552708 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.553794 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.554878 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.555942 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5bk7m"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.557056 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jjs2r"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.558356 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kpg7m"] Jan 25 07:59:08 crc 
kubenswrapper[4832]: I0125 07:59:08.559487 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96"] Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.563055 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.583793 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.596859 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc912b0f-bde8-4185-be84-2a2c3394024f-metrics-tls\") pod \"dns-operator-744455d44c-fth6d\" (UID: \"cc912b0f-bde8-4185-be84-2a2c3394024f\") " pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.614708 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.642671 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.663574 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.669064 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.669102 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.669449 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.683208 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.703311 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.723575 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.742677 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.762633 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.783046 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.802979 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.842563 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.862857 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.883314 4832 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.903314 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.928433 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.943690 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.963361 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 25 07:59:08 crc kubenswrapper[4832]: I0125 07:59:08.982929 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.003074 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.023351 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.042842 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.062742 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.082503 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.103729 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.123625 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.143353 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.163694 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.183665 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.203109 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.223373 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.243763 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.263824 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.283583 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.303424 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.323176 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.327850 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.344213 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.363187 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.383077 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.404008 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.423705 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.441456 4832 request.go:700] Waited for 1.010266658s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.443093 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.463514 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.483038 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.506757 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.523094 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.542770 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.562697 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.582694 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.603198 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.623180 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.643972 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.663862 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.668809 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.682970 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.702788 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.722869 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.743513 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.764353 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.783609 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.803455 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.823890 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.843587 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.863672 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.883705 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.903283 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.923607 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.944236 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.962566 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 25 07:59:09 crc kubenswrapper[4832]: I0125 07:59:09.982990 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.003679 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.023316 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.043666 4832 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.063517 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.082875 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.103305 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.122475 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.142594 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.163530 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.211188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e4fd4e7-2916-47d8-8d38-012c53e792fc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7rwcz\" (UID: \"0e4fd4e7-2916-47d8-8d38-012c53e792fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.230860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvl7k\" (UniqueName: \"kubernetes.io/projected/39120fe3-c252-4345-80bc-048cde22bafe-kube-api-access-wvl7k\") pod \"openshift-config-operator-7777fb866f-jppn9\" (UID: 
\"39120fe3-c252-4345-80bc-048cde22bafe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.235182 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzhk7\" (UniqueName: \"kubernetes.io/projected/cc912b0f-bde8-4185-be84-2a2c3394024f-kube-api-access-mzhk7\") pod \"dns-operator-744455d44c-fth6d\" (UID: \"cc912b0f-bde8-4185-be84-2a2c3394024f\") " pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.248662 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.257412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9x6j\" (UniqueName: \"kubernetes.io/projected/c97f51ea-b215-4660-bc7b-2406783aa3bb-kube-api-access-m9x6j\") pod \"marketplace-operator-79b997595-gqjzs\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.278022 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqq64\" (UniqueName: \"kubernetes.io/projected/d506c861-ab5e-4341-8e16-ce9166f24d5c-kube-api-access-zqq64\") pod \"apiserver-76f77b778f-99kns\" (UID: \"d506c861-ab5e-4341-8e16-ce9166f24d5c\") " pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.297577 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k8k5\" (UniqueName: \"kubernetes.io/projected/d48c21e4-2d38-4055-a586-93b65a3ff446-kube-api-access-8k8k5\") pod \"olm-operator-6b444d44fb-nlxgx\" (UID: \"d48c21e4-2d38-4055-a586-93b65a3ff446\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" Jan 25 
07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.303870 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.314467 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.323186 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.346049 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.361902 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.364761 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.376717 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.384663 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.404177 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.423133 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.423260 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jppn9"] Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.461669 4832 request.go:700] Waited for 1.792177759s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.463329 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.464139 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.485236 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.488727 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqjzs"] Jan 25 07:59:10 crc kubenswrapper[4832]: W0125 07:59:10.497461 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc97f51ea_b215_4660_bc7b_2406783aa3bb.slice/crio-09260039b4ef997bc5158f5963a092c064b8417a9c43275caeaa431a633cea7b WatchSource:0}: Error finding container 09260039b4ef997bc5158f5963a092c064b8417a9c43275caeaa431a633cea7b: Status 404 returned error can't find the container with id 09260039b4ef997bc5158f5963a092c064b8417a9c43275caeaa431a633cea7b Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.502842 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.523783 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534214 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" event={"ID":"39120fe3-c252-4345-80bc-048cde22bafe","Type":"ContainerStarted","Data":"2d45c4b83657e89cc8c91f8884991f46d8766b1c62898fa2c38e5f38e095943a"} Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534306 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1211d5b-db27-4814-85b9-241c30afaaab-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-gp55m\" (UID: \"b1211d5b-db27-4814-85b9-241c30afaaab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534359 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534380 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-dir\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534437 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afbd903-07e1-4806-9a41-a073a6a4acb7-config\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534453 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534530 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fad5166-9aa0-4c10-8c73-2186af1d226d-serving-cert\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-serving-cert\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4vl\" (UniqueName: \"kubernetes.io/projected/8be00535-0bc6-41a2-a79c-552be0f574a8-kube-api-access-xk4vl\") 
pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c592226b-85c1-48b3-9e85-cbd606c1f94d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534683 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-policies\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534698 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-etcd-client\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hzj\" (UniqueName: \"kubernetes.io/projected/9d51e019-aeb4-42b0-a900-257aead64221-kube-api-access-k4hzj\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534840 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8pm\" (UniqueName: \"kubernetes.io/projected/6afbd903-07e1-4806-9a41-a073a6a4acb7-kube-api-access-hh8pm\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534859 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-oauth-config\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534889 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-tls\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534922 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.534956 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-oauth-serving-cert\") pod \"console-f9d7485db-8pg27\" (UID: 
\"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462c88d9-0b9e-4b53-9b5d-78e14179c952-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535037 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535052 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be00535-0bc6-41a2-a79c-552be0f574a8-serving-cert\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58b235e2-ab37-4d26-ba86-c188dae1bcda-metrics-tls\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535147 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6da273c-cb4f-48a9-88cf-70ae8647e580-audit-dir\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535184 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fee4de-12e8-4452-a3a7-731815ecbedd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535235 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/267d2772-42e1-4031-bc5f-ac78559a7f82-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535263 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwn9v\" (UniqueName: \"kubernetes.io/projected/468a6836-4216-434c-8c75-16b6d41eb2c4-kube-api-access-rwn9v\") pod \"cluster-samples-operator-665b6dd947-b84df\" (UID: \"468a6836-4216-434c-8c75-16b6d41eb2c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" 
event={"ID":"c97f51ea-b215-4660-bc7b-2406783aa3bb","Type":"ContainerStarted","Data":"09260039b4ef997bc5158f5963a092c064b8417a9c43275caeaa431a633cea7b"} Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6afbd903-07e1-4806-9a41-a073a6a4acb7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535343 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/462c88d9-0b9e-4b53-9b5d-78e14179c952-config\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.535456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6afbd903-07e1-4806-9a41-a073a6a4acb7-images\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-encryption-config\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535605 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c592226b-85c1-48b3-9e85-cbd606c1f94d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535648 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-service-ca\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6646m\" (UniqueName: \"kubernetes.io/projected/c592226b-85c1-48b3-9e85-cbd606c1f94d-kube-api-access-6646m\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535723 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9mbt\" (UniqueName: \"kubernetes.io/projected/95dbbcf8-838b-4f56-928a-81b4f038b259-kube-api-access-c9mbt\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d51e019-aeb4-42b0-a900-257aead64221-trusted-ca\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.535803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d51e019-aeb4-42b0-a900-257aead64221-config\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58b235e2-ab37-4d26-ba86-c188dae1bcda-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.535865 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-certificates\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/267d2772-42e1-4031-bc5f-ac78559a7f82-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-trusted-ca\") pod 
\"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-config\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-serving-cert\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536143 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-client-ca\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536163 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b235e2-ab37-4d26-ba86-c188dae1bcda-trusted-ca\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee4de-12e8-4452-a3a7-731815ecbedd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-trusted-ca-bundle\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536571 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg799\" (UniqueName: \"kubernetes.io/projected/70fee4de-12e8-4452-a3a7-731815ecbedd-kube-api-access-zg799\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536673 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: 
\"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536704 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch6hx\" (UniqueName: \"kubernetes.io/projected/f6da273c-cb4f-48a9-88cf-70ae8647e580-kube-api-access-ch6hx\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536740 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/468a6836-4216-434c-8c75-16b6d41eb2c4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b84df\" (UID: \"468a6836-4216-434c-8c75-16b6d41eb2c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d51e019-aeb4-42b0-a900-257aead64221-serving-cert\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/462c88d9-0b9e-4b53-9b5d-78e14179c952-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536790 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536827 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcjj\" (UniqueName: \"kubernetes.io/projected/7fad5166-9aa0-4c10-8c73-2186af1d226d-kube-api-access-shcjj\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-console-config\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536867 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c592226b-85c1-48b3-9e85-cbd606c1f94d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536900 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-client-ca\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.536960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-bound-sa-token\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.537063 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-audit-policies\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.537166 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86wx4\" (UniqueName: \"kubernetes.io/projected/b1211d5b-db27-4814-85b9-241c30afaaab-kube-api-access-86wx4\") pod \"multus-admission-controller-857f4d67dd-gp55m\" (UID: \"b1211d5b-db27-4814-85b9-241c30afaaab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:10 crc kubenswrapper[4832]: E0125 07:59:10.537212 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.037198122 +0000 UTC m=+133.711021745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.537250 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lq6\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-kube-api-access-l7lq6\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.537276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x5qc\" (UniqueName: 
\"kubernetes.io/projected/cb0834ac-2ef5-48dc-a86f-511e79c897f7-kube-api-access-4x5qc\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.537300 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-config\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.537317 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gxcx\" (UniqueName: \"kubernetes.io/projected/58b235e2-ab37-4d26-ba86-c188dae1bcda-kube-api-access-6gxcx\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.543052 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.553564 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz"] Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.563090 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.583458 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.591103 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fth6d"] Jan 25 07:59:10 crc kubenswrapper[4832]: W0125 07:59:10.595918 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc912b0f_bde8_4185_be84_2a2c3394024f.slice/crio-4825f4d794d5557aba76d8be3afb23d87154395d4f3f7e546f01595c8dafebfe WatchSource:0}: Error finding container 4825f4d794d5557aba76d8be3afb23d87154395d4f3f7e546f01595c8dafebfe: Status 404 returned error can't find the container with id 4825f4d794d5557aba76d8be3afb23d87154395d4f3f7e546f01595c8dafebfe Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.637966 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638185 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/462c88d9-0b9e-4b53-9b5d-78e14179c952-config\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: 
\"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: E0125 07:59:10.638259 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.138210482 +0000 UTC m=+133.812034045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638373 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24acc510-4a43-4275-9a46-fe2e8258b3c7-metrics-tls\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6afbd903-07e1-4806-9a41-a073a6a4acb7-images\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638491 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-encryption-config\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24acc510-4a43-4275-9a46-fe2e8258b3c7-config-volume\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638574 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638653 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6646m\" (UniqueName: \"kubernetes.io/projected/c592226b-85c1-48b3-9e85-cbd606c1f94d-kube-api-access-6646m\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638687 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca662f7-e916-4728-8b6a-0b34ace7117f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-client\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9mbt\" (UniqueName: \"kubernetes.io/projected/95dbbcf8-838b-4f56-928a-81b4f038b259-kube-api-access-c9mbt\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d51e019-aeb4-42b0-a900-257aead64221-trusted-ca\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d51e019-aeb4-42b0-a900-257aead64221-config\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " 
pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-config\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/462c88d9-0b9e-4b53-9b5d-78e14179c952-config\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpnh2\" (UniqueName: \"kubernetes.io/projected/fca662f7-e916-4728-8b6a-0b34ace7117f-kube-api-access-tpnh2\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.638973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-config\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639003 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1228f33e-a6bd-4c51-ad90-f005c2848d83-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tqtnp\" (UID: \"1228f33e-a6bd-4c51-ad90-f005c2848d83\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639027 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-plugins-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639107 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/267d2772-42e1-4031-bc5f-ac78559a7f82-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639162 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-trusted-ca\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639187 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn4g\" (UniqueName: \"kubernetes.io/projected/5c72bea6-adc6-4db0-aec2-3436d21d9871-kube-api-access-pfn4g\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee4de-12e8-4452-a3a7-731815ecbedd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639275 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0912c6-9dfc-437a-92f0-c6ee3063c848-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdjp\" (UniqueName: \"kubernetes.io/projected/1228f33e-a6bd-4c51-ad90-f005c2848d83-kube-api-access-5wdjp\") pod \"package-server-manager-789f6589d5-tqtnp\" (UID: \"1228f33e-a6bd-4c51-ad90-f005c2848d83\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639352 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dkwz\" (UniqueName: \"kubernetes.io/projected/a32ac557-809a-4a0d-8c18-3c8c5730e849-kube-api-access-8dkwz\") pod \"control-plane-machine-set-operator-78cbb6b69f-fns8l\" (UID: \"a32ac557-809a-4a0d-8c18-3c8c5730e849\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 
07:59:10.639411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/468a6836-4216-434c-8c75-16b6d41eb2c4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b84df\" (UID: \"468a6836-4216-434c-8c75-16b6d41eb2c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639517 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639567 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch6hx\" (UniqueName: \"kubernetes.io/projected/f6da273c-cb4f-48a9-88cf-70ae8647e580-kube-api-access-ch6hx\") 
pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-secret-volume\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/462c88d9-0b9e-4b53-9b5d-78e14179c952-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.639679 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shcjj\" (UniqueName: \"kubernetes.io/projected/7fad5166-9aa0-4c10-8c73-2186af1d226d-kube-api-access-shcjj\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.640010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d51e019-aeb4-42b0-a900-257aead64221-trusted-ca\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.640274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d51e019-aeb4-42b0-a900-257aead64221-config\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.640622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.640671 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-metrics-certs\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.641620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6afbd903-07e1-4806-9a41-a073a6a4acb7-images\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.641450 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-trusted-ca\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.641545 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.640866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/267d2772-42e1-4031-bc5f-ac78559a7f82-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.641676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.641723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-client-ca\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: E0125 07:59:10.642037 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.14202198 +0000 UTC m=+133.815845513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.643337 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c670a610-3a09-4fc1-acb2-f768bc4e5bab-cert\") pod \"ingress-canary-5bk7m\" (UID: \"c670a610-3a09-4fc1-acb2-f768bc4e5bab\") " pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.648860 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-bound-sa-token\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.649051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-audit-policies\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.649447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/468a6836-4216-434c-8c75-16b6d41eb2c4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b84df\" (UID: 
\"468a6836-4216-434c-8c75-16b6d41eb2c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.649720 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a32ac557-809a-4a0d-8c18-3c8c5730e849-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fns8l\" (UID: \"a32ac557-809a-4a0d-8c18-3c8c5730e849\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.649803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.649819 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee4de-12e8-4452-a3a7-731815ecbedd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.649965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86wx4\" (UniqueName: \"kubernetes.io/projected/b1211d5b-db27-4814-85b9-241c30afaaab-kube-api-access-86wx4\") pod \"multus-admission-controller-857f4d67dd-gp55m\" (UID: \"b1211d5b-db27-4814-85b9-241c30afaaab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 
07:59:10.649979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650042 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdtr\" (UniqueName: \"kubernetes.io/projected/c05896f4-ee7d-4b10-949e-b8bf0d822313-kube-api-access-zqdtr\") pod \"downloads-7954f5f757-jvld2\" (UID: \"c05896f4-ee7d-4b10-949e-b8bf0d822313\") " pod="openshift-console/downloads-7954f5f757-jvld2" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650081 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b945d594-8566-495a-a66a-92fcd625f021-certs\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650118 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-service-ca\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gxcx\" (UniqueName: \"kubernetes.io/projected/58b235e2-ab37-4d26-ba86-c188dae1bcda-kube-api-access-6gxcx\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: 
\"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1211d5b-db27-4814-85b9-241c30afaaab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gp55m\" (UID: \"b1211d5b-db27-4814-85b9-241c30afaaab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/648bd733-1181-4dcf-8b9c-40806f713ca6-serving-cert\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65h5\" (UniqueName: \"kubernetes.io/projected/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-kube-api-access-v65h5\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650232 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-srv-cert\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650343 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c72bea6-adc6-4db0-aec2-3436d21d9871-proxy-tls\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/023a5b50-72c3-42a2-8104-dc50489cf857-serving-cert\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650424 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4l94\" (UniqueName: \"kubernetes.io/projected/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-kube-api-access-l4l94\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-dir\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650491 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afbd903-07e1-4806-9a41-a073a6a4acb7-config\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9626a1b0-481b-4cd5-a439-c45a98f1c391-auth-proxy-config\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650587 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmqb\" (UniqueName: \"kubernetes.io/projected/bd278886-fb8d-4013-ae54-83edde53bdaa-kube-api-access-hcmqb\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9626a1b0-481b-4cd5-a439-c45a98f1c391-config\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650687 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650720 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-serving-cert\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4vl\" (UniqueName: \"kubernetes.io/projected/8be00535-0bc6-41a2-a79c-552be0f574a8-kube-api-access-xk4vl\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c592226b-85c1-48b3-9e85-cbd606c1f94d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650828 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.650859 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-stats-auth\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650892 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-signing-cabundle\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650927 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-tls\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8pm\" (UniqueName: \"kubernetes.io/projected/6afbd903-07e1-4806-9a41-a073a6a4acb7-kube-api-access-hh8pm\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.650987 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-oauth-config\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " 
pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651019 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-mountpoint-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-service-ca-bundle\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-signing-key\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651131 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-oauth-serving-cert\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-audit-policies\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651258 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-ca\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/267d2772-42e1-4031-bc5f-ac78559a7f82-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651324 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwn9v\" (UniqueName: \"kubernetes.io/projected/468a6836-4216-434c-8c75-16b6d41eb2c4-kube-api-access-rwn9v\") pod \"cluster-samples-operator-665b6dd947-b84df\" (UID: \"468a6836-4216-434c-8c75-16b6d41eb2c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651422 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be00535-0bc6-41a2-a79c-552be0f574a8-serving-cert\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651465 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58b235e2-ab37-4d26-ba86-c188dae1bcda-metrics-tls\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6da273c-cb4f-48a9-88cf-70ae8647e580-audit-dir\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651528 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fee4de-12e8-4452-a3a7-731815ecbedd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc4f06b-3e9a-4855-8400-faabc37cd870-service-ca-bundle\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.651729 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-socket-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651766 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-registration-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-profile-collector-cert\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.651832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrr9d\" (UniqueName: \"kubernetes.io/projected/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-kube-api-access-zrr9d\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.652008 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: 
\"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.652081 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfmj\" (UniqueName: \"kubernetes.io/projected/b945d594-8566-495a-a66a-92fcd625f021-kube-api-access-5wfmj\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.652119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c592226b-85c1-48b3-9e85-cbd606c1f94d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.652159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-service-ca\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.654443 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-client-ca\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.654690 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b1211d5b-db27-4814-85b9-241c30afaaab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gp55m\" (UID: \"b1211d5b-db27-4814-85b9-241c30afaaab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.655029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-dir\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.655547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6da273c-cb4f-48a9-88cf-70ae8647e580-audit-dir\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.655657 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6da273c-cb4f-48a9-88cf-70ae8647e580-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.656008 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9626a1b0-481b-4cd5-a439-c45a98f1c391-machine-approver-tls\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.656044 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/567da687-f308-4473-a3d0-aad511ca6e8b-tmpfs\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.657258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c592226b-85c1-48b3-9e85-cbd606c1f94d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.658044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-oauth-serving-cert\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.659904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fee4de-12e8-4452-a3a7-731815ecbedd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.661555 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afbd903-07e1-4806-9a41-a073a6a4acb7-config\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 
07:59:10.662160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-oauth-config\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662179 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e0912c6-9dfc-437a-92f0-c6ee3063c848-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/567da687-f308-4473-a3d0-aad511ca6e8b-webhook-cert\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662667 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6k2\" (UniqueName: \"kubernetes.io/projected/567da687-f308-4473-a3d0-aad511ca6e8b-kube-api-access-tk6k2\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44txw\" (UniqueName: \"kubernetes.io/projected/92293986-2979-44e0-8331-72f2546d576e-kube-api-access-44txw\") pod 
\"migrator-59844c95c7-c8c6f\" (UID: \"92293986-2979-44e0-8331-72f2546d576e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648bd733-1181-4dcf-8b9c-40806f713ca6-config\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-certificates\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58b235e2-ab37-4d26-ba86-c188dae1bcda-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662813 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-config-volume\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-config\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-serving-cert\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbljd\" (UniqueName: \"kubernetes.io/projected/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-kube-api-access-fbljd\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-client-ca\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662997 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b235e2-ab37-4d26-ba86-c188dae1bcda-trusted-ca\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 
crc kubenswrapper[4832]: I0125 07:59:10.663048 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-trusted-ca-bundle\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg799\" (UniqueName: \"kubernetes.io/projected/70fee4de-12e8-4452-a3a7-731815ecbedd-kube-api-access-zg799\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhr2\" (UniqueName: \"kubernetes.io/projected/9626a1b0-481b-4cd5-a439-c45a98f1c391-kube-api-access-shhr2\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663145 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b69z\" (UniqueName: \"kubernetes.io/projected/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-kube-api-access-2b69z\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663171 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-console-config\") pod 
\"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d51e019-aeb4-42b0-a900-257aead64221-serving-cert\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295xp\" (UniqueName: \"kubernetes.io/projected/023a5b50-72c3-42a2-8104-dc50489cf857-kube-api-access-295xp\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663334 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c592226b-85c1-48b3-9e85-cbd606c1f94d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtpx\" (UniqueName: \"kubernetes.io/projected/648bd733-1181-4dcf-8b9c-40806f713ca6-kube-api-access-4rtpx\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663468 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lq6\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-kube-api-access-l7lq6\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x5qc\" (UniqueName: \"kubernetes.io/projected/cb0834ac-2ef5-48dc-a86f-511e79c897f7-kube-api-access-4x5qc\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-config\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd278886-fb8d-4013-ae54-83edde53bdaa-proxy-tls\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663646 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5q7m\" (UniqueName: \"kubernetes.io/projected/6eb8ff11-3ea3-4569-9d87-e89416c04784-kube-api-access-q5q7m\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/567da687-f308-4473-a3d0-aad511ca6e8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663704 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca662f7-e916-4728-8b6a-0b34ace7117f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663736 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-default-certificate\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfl69\" (UniqueName: \"kubernetes.io/projected/24acc510-4a43-4275-9a46-fe2e8258b3c7-kube-api-access-sfl69\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.663825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.664112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-serving-cert\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " 
pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.662374 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be00535-0bc6-41a2-a79c-552be0f574a8-serving-cert\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.664870 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd278886-fb8d-4013-ae54-83edde53bdaa-images\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.664906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd278886-fb8d-4013-ae54-83edde53bdaa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.664938 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fad5166-9aa0-4c10-8c73-2186af1d226d-serving-cert\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.664973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-policies\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665003 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-etcd-client\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvf6p\" (UniqueName: \"kubernetes.io/projected/cdc4f06b-3e9a-4855-8400-faabc37cd870-kube-api-access-xvf6p\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665063 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b945d594-8566-495a-a66a-92fcd625f021-node-bootstrap-token\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4hzj\" (UniqueName: \"kubernetes.io/projected/9d51e019-aeb4-42b0-a900-257aead64221-kube-api-access-k4hzj\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665144 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c72bea6-adc6-4db0-aec2-3436d21d9871-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2cvd\" (UniqueName: \"kubernetes.io/projected/c670a610-3a09-4fc1-acb2-f768bc4e5bab-kube-api-access-t2cvd\") pod \"ingress-canary-5bk7m\" (UID: \"c670a610-3a09-4fc1-acb2-f768bc4e5bab\") " pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462c88d9-0b9e-4b53-9b5d-78e14179c952-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665263 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-csi-data-dir\") pod 
\"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0912c6-9dfc-437a-92f0-c6ee3063c848-config\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665337 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665456 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6afbd903-07e1-4806-9a41-a073a6a4acb7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.665487 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb8ff11-3ea3-4569-9d87-e89416c04784-serving-cert\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 
07:59:10.666907 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-config\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.668051 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-service-ca\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.668741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-encryption-config\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.670974 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.673177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 
25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.678355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-client-ca\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.679617 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/267d2772-42e1-4031-bc5f-ac78559a7f82-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.679798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58b235e2-ab37-4d26-ba86-c188dae1bcda-trusted-ca\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.680507 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c592226b-85c1-48b3-9e85-cbd606c1f94d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.680633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q5r28\" 
(UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.672580 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-trusted-ca-bundle\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.681405 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-tls\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.681481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-certificates\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.681672 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-console-config\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.681710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58b235e2-ab37-4d26-ba86-c188dae1bcda-metrics-tls\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: 
\"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.682044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.682156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-config\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.682154 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-99kns"] Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.682243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.682599 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462c88d9-0b9e-4b53-9b5d-78e14179c952-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 
25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.682718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-policies\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.683162 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.683533 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.683871 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.685021 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d51e019-aeb4-42b0-a900-257aead64221-serving-cert\") pod \"console-operator-58897d9998-fswfm\" (UID: 
\"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.685770 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6646m\" (UniqueName: \"kubernetes.io/projected/c592226b-85c1-48b3-9e85-cbd606c1f94d-kube-api-access-6646m\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.685843 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.689930 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6afbd903-07e1-4806-9a41-a073a6a4acb7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.696922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9mbt\" (UniqueName: \"kubernetes.io/projected/95dbbcf8-838b-4f56-928a-81b4f038b259-kube-api-access-c9mbt\") pod \"console-f9d7485db-8pg27\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.697880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7fad5166-9aa0-4c10-8c73-2186af1d226d-serving-cert\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.719753 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch6hx\" (UniqueName: \"kubernetes.io/projected/f6da273c-cb4f-48a9-88cf-70ae8647e580-kube-api-access-ch6hx\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.737504 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx"] Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.741171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/462c88d9-0b9e-4b53-9b5d-78e14179c952-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7n7p\" (UID: \"462c88d9-0b9e-4b53-9b5d-78e14179c952\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.745556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-serving-cert\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.746865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6da273c-cb4f-48a9-88cf-70ae8647e580-etcd-client\") pod \"apiserver-7bbb656c7d-fcqfl\" (UID: \"f6da273c-cb4f-48a9-88cf-70ae8647e580\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:10 crc kubenswrapper[4832]: W0125 07:59:10.752136 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd506c861_ab5e_4341_8e16_ce9166f24d5c.slice/crio-8668c5a6fc6f0a636013057a8fab1be32f58a8e7ef9383963c4db20562e0ea8d WatchSource:0}: Error finding container 8668c5a6fc6f0a636013057a8fab1be32f58a8e7ef9383963c4db20562e0ea8d: Status 404 returned error can't find the container with id 8668c5a6fc6f0a636013057a8fab1be32f58a8e7ef9383963c4db20562e0ea8d Jan 25 07:59:10 crc kubenswrapper[4832]: W0125 07:59:10.755501 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd48c21e4_2d38_4055_a586_93b65a3ff446.slice/crio-d1e59c05f9e1ed520523cd14a8ffb43bad4b6cca10f0ccd478670d9e15309c27 WatchSource:0}: Error finding container d1e59c05f9e1ed520523cd14a8ffb43bad4b6cca10f0ccd478670d9e15309c27: Status 404 returned error can't find the container with id d1e59c05f9e1ed520523cd14a8ffb43bad4b6cca10f0ccd478670d9e15309c27 Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.760408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shcjj\" (UniqueName: \"kubernetes.io/projected/7fad5166-9aa0-4c10-8c73-2186af1d226d-kube-api-access-shcjj\") pod \"route-controller-manager-6576b87f9c-csbzw\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.766574 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:10 crc 
kubenswrapper[4832]: E0125 07:59:10.766726 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.266694616 +0000 UTC m=+133.940518149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.766786 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhr2\" (UniqueName: \"kubernetes.io/projected/9626a1b0-481b-4cd5-a439-c45a98f1c391-kube-api-access-shhr2\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.766828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b69z\" (UniqueName: \"kubernetes.io/projected/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-kube-api-access-2b69z\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.766864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-295xp\" (UniqueName: \"kubernetes.io/projected/023a5b50-72c3-42a2-8104-dc50489cf857-kube-api-access-295xp\") pod \"etcd-operator-b45778765-f222l\" (UID: 
\"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.766903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtpx\" (UniqueName: \"kubernetes.io/projected/648bd733-1181-4dcf-8b9c-40806f713ca6-kube-api-access-4rtpx\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.766961 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd278886-fb8d-4013-ae54-83edde53bdaa-proxy-tls\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.766990 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5q7m\" (UniqueName: \"kubernetes.io/projected/6eb8ff11-3ea3-4569-9d87-e89416c04784-kube-api-access-q5q7m\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767010 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/567da687-f308-4473-a3d0-aad511ca6e8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fca662f7-e916-4728-8b6a-0b34ace7117f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-default-certificate\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767060 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfl69\" (UniqueName: \"kubernetes.io/projected/24acc510-4a43-4275-9a46-fe2e8258b3c7-kube-api-access-sfl69\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767076 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd278886-fb8d-4013-ae54-83edde53bdaa-images\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767096 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd278886-fb8d-4013-ae54-83edde53bdaa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.767120 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvf6p\" (UniqueName: \"kubernetes.io/projected/cdc4f06b-3e9a-4855-8400-faabc37cd870-kube-api-access-xvf6p\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b945d594-8566-495a-a66a-92fcd625f021-node-bootstrap-token\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767162 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c72bea6-adc6-4db0-aec2-3436d21d9871-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2cvd\" (UniqueName: \"kubernetes.io/projected/c670a610-3a09-4fc1-acb2-f768bc4e5bab-kube-api-access-t2cvd\") pod \"ingress-canary-5bk7m\" (UID: \"c670a610-3a09-4fc1-acb2-f768bc4e5bab\") " pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767201 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-csi-data-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: 
\"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767217 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0912c6-9dfc-437a-92f0-c6ee3063c848-config\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767235 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767260 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb8ff11-3ea3-4569-9d87-e89416c04784-serving-cert\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767287 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24acc510-4a43-4275-9a46-fe2e8258b3c7-metrics-tls\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767305 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/24acc510-4a43-4275-9a46-fe2e8258b3c7-config-volume\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca662f7-e916-4728-8b6a-0b34ace7117f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-client\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767374 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-config\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767410 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpnh2\" (UniqueName: \"kubernetes.io/projected/fca662f7-e916-4728-8b6a-0b34ace7117f-kube-api-access-tpnh2\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.767431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-config\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1228f33e-a6bd-4c51-ad90-f005c2848d83-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tqtnp\" (UID: \"1228f33e-a6bd-4c51-ad90-f005c2848d83\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767466 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-plugins-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfn4g\" (UniqueName: \"kubernetes.io/projected/5c72bea6-adc6-4db0-aec2-3436d21d9871-kube-api-access-pfn4g\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767506 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0912c6-9dfc-437a-92f0-c6ee3063c848-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: 
\"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767524 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdjp\" (UniqueName: \"kubernetes.io/projected/1228f33e-a6bd-4c51-ad90-f005c2848d83-kube-api-access-5wdjp\") pod \"package-server-manager-789f6589d5-tqtnp\" (UID: \"1228f33e-a6bd-4c51-ad90-f005c2848d83\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767543 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dkwz\" (UniqueName: \"kubernetes.io/projected/a32ac557-809a-4a0d-8c18-3c8c5730e849-kube-api-access-8dkwz\") pod \"control-plane-machine-set-operator-78cbb6b69f-fns8l\" (UID: \"a32ac557-809a-4a0d-8c18-3c8c5730e849\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767559 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-secret-volume\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767596 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-metrics-certs\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c670a610-3a09-4fc1-acb2-f768bc4e5bab-cert\") pod \"ingress-canary-5bk7m\" (UID: \"c670a610-3a09-4fc1-acb2-f768bc4e5bab\") " pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a32ac557-809a-4a0d-8c18-3c8c5730e849-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fns8l\" (UID: \"a32ac557-809a-4a0d-8c18-3c8c5730e849\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdtr\" (UniqueName: \"kubernetes.io/projected/c05896f4-ee7d-4b10-949e-b8bf0d822313-kube-api-access-zqdtr\") pod \"downloads-7954f5f757-jvld2\" (UID: \"c05896f4-ee7d-4b10-949e-b8bf0d822313\") " 
pod="openshift-console/downloads-7954f5f757-jvld2" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b945d594-8566-495a-a66a-92fcd625f021-certs\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-service-ca\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/648bd733-1181-4dcf-8b9c-40806f713ca6-serving-cert\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65h5\" (UniqueName: \"kubernetes.io/projected/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-kube-api-access-v65h5\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-srv-cert\") pod \"catalog-operator-68c6474976-6gswk\" (UID: 
\"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c72bea6-adc6-4db0-aec2-3436d21d9871-proxy-tls\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767831 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/023a5b50-72c3-42a2-8104-dc50489cf857-serving-cert\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4l94\" (UniqueName: \"kubernetes.io/projected/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-kube-api-access-l4l94\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9626a1b0-481b-4cd5-a439-c45a98f1c391-auth-proxy-config\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcmqb\" (UniqueName: 
\"kubernetes.io/projected/bd278886-fb8d-4013-ae54-83edde53bdaa-kube-api-access-hcmqb\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9626a1b0-481b-4cd5-a439-c45a98f1c391-config\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-stats-auth\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767974 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-signing-cabundle\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.767996 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-mountpoint-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-service-ca-bundle\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768028 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-signing-key\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-ca\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc4f06b-3e9a-4855-8400-faabc37cd870-service-ca-bundle\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: 
I0125 07:59:10.768085 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-socket-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-registration-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768117 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-profile-collector-cert\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768135 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrr9d\" (UniqueName: \"kubernetes.io/projected/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-kube-api-access-zrr9d\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768156 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfmj\" (UniqueName: \"kubernetes.io/projected/b945d594-8566-495a-a66a-92fcd625f021-kube-api-access-5wfmj\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " 
pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768161 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bd278886-fb8d-4013-ae54-83edde53bdaa-images\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9626a1b0-481b-4cd5-a439-c45a98f1c391-machine-approver-tls\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0912c6-9dfc-437a-92f0-c6ee3063c848-config\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/567da687-f308-4473-a3d0-aad511ca6e8b-tmpfs\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4e0912c6-9dfc-437a-92f0-c6ee3063c848-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/567da687-f308-4473-a3d0-aad511ca6e8b-webhook-cert\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk6k2\" (UniqueName: \"kubernetes.io/projected/567da687-f308-4473-a3d0-aad511ca6e8b-kube-api-access-tk6k2\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44txw\" (UniqueName: \"kubernetes.io/projected/92293986-2979-44e0-8331-72f2546d576e-kube-api-access-44txw\") pod \"migrator-59844c95c7-c8c6f\" (UID: \"92293986-2979-44e0-8331-72f2546d576e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648bd733-1181-4dcf-8b9c-40806f713ca6-config\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.768514 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-config-volume\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbljd\" (UniqueName: \"kubernetes.io/projected/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-kube-api-access-fbljd\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768669 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/567da687-f308-4473-a3d0-aad511ca6e8b-tmpfs\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c72bea6-adc6-4db0-aec2-3436d21d9871-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.768840 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-csi-data-dir\") pod \"csi-hostpathplugin-jjs2r\" 
(UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.769795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd278886-fb8d-4013-ae54-83edde53bdaa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.770098 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.770400 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/648bd733-1181-4dcf-8b9c-40806f713ca6-config\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.770885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-config-volume\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.771146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/bd278886-fb8d-4013-ae54-83edde53bdaa-proxy-tls\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.771220 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.773628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca662f7-e916-4728-8b6a-0b34ace7117f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.771244 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-plugins-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.771422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-config\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: 
I0125 07:59:10.772484 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24acc510-4a43-4275-9a46-fe2e8258b3c7-config-volume\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.772695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/567da687-f308-4473-a3d0-aad511ca6e8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.772927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b945d594-8566-495a-a66a-92fcd625f021-node-bootstrap-token\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.773482 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-service-ca\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.771966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9626a1b0-481b-4cd5-a439-c45a98f1c391-auth-proxy-config\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc 
kubenswrapper[4832]: E0125 07:59:10.773887 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.273873536 +0000 UTC m=+133.947697069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.773984 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/24acc510-4a43-4275-9a46-fe2e8258b3c7-metrics-tls\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.774184 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.774488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9626a1b0-481b-4cd5-a439-c45a98f1c391-config\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 
07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.775091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-signing-cabundle\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.775168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-mountpoint-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.775712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eb8ff11-3ea3-4569-9d87-e89416c04784-service-ca-bundle\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.776509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-registration-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.776591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-config\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc 
kubenswrapper[4832]: I0125 07:59:10.777137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c670a610-3a09-4fc1-acb2-f768bc4e5bab-cert\") pod \"ingress-canary-5bk7m\" (UID: \"c670a610-3a09-4fc1-acb2-f768bc4e5bab\") " pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.777255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb8ff11-3ea3-4569-9d87-e89416c04784-serving-cert\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.777323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-ca\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.777594 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-socket-dir\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.777842 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1228f33e-a6bd-4c51-ad90-f005c2848d83-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tqtnp\" (UID: \"1228f33e-a6bd-4c51-ad90-f005c2848d83\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 
07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.778024 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc4f06b-3e9a-4855-8400-faabc37cd870-service-ca-bundle\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.778114 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/023a5b50-72c3-42a2-8104-dc50489cf857-etcd-client\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.778533 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-stats-auth\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.780114 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/648bd733-1181-4dcf-8b9c-40806f713ca6-serving-cert\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.780876 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9626a1b0-481b-4cd5-a439-c45a98f1c391-machine-approver-tls\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.781171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/567da687-f308-4473-a3d0-aad511ca6e8b-webhook-cert\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.781462 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-secret-volume\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.781565 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/023a5b50-72c3-42a2-8104-dc50489cf857-serving-cert\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.781960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a32ac557-809a-4a0d-8c18-3c8c5730e849-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fns8l\" (UID: \"a32ac557-809a-4a0d-8c18-3c8c5730e849\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.782034 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-metrics-certs\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.782186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c72bea6-adc6-4db0-aec2-3436d21d9871-proxy-tls\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.782659 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b945d594-8566-495a-a66a-92fcd625f021-certs\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.782934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cdc4f06b-3e9a-4855-8400-faabc37cd870-default-certificate\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.783518 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-signing-key\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.788066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4e0912c6-9dfc-437a-92f0-c6ee3063c848-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.788725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca662f7-e916-4728-8b6a-0b34ace7117f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.789062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-srv-cert\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.790928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-profile-collector-cert\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.795782 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-bound-sa-token\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 
crc kubenswrapper[4832]: I0125 07:59:10.796435 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.824037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86wx4\" (UniqueName: \"kubernetes.io/projected/b1211d5b-db27-4814-85b9-241c30afaaab-kube-api-access-86wx4\") pod \"multus-admission-controller-857f4d67dd-gp55m\" (UID: \"b1211d5b-db27-4814-85b9-241c30afaaab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.841894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gxcx\" (UniqueName: \"kubernetes.io/projected/58b235e2-ab37-4d26-ba86-c188dae1bcda-kube-api-access-6gxcx\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.859240 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8pm\" (UniqueName: \"kubernetes.io/projected/6afbd903-07e1-4806-9a41-a073a6a4acb7-kube-api-access-hh8pm\") pod \"machine-api-operator-5694c8668f-29fbk\" (UID: \"6afbd903-07e1-4806-9a41-a073a6a4acb7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.870510 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:10 crc kubenswrapper[4832]: E0125 07:59:10.871086 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.371070149 +0000 UTC m=+134.044893682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.876931 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4vl\" (UniqueName: \"kubernetes.io/projected/8be00535-0bc6-41a2-a79c-552be0f574a8-kube-api-access-xk4vl\") pod \"controller-manager-879f6c89f-sqbmg\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.903000 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwn9v\" (UniqueName: \"kubernetes.io/projected/468a6836-4216-434c-8c75-16b6d41eb2c4-kube-api-access-rwn9v\") pod \"cluster-samples-operator-665b6dd947-b84df\" (UID: \"468a6836-4216-434c-8c75-16b6d41eb2c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.923668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58b235e2-ab37-4d26-ba86-c188dae1bcda-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zxhsq\" (UID: \"58b235e2-ab37-4d26-ba86-c188dae1bcda\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.942027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7lq6\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-kube-api-access-l7lq6\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.954263 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.966095 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x5qc\" (UniqueName: \"kubernetes.io/projected/cb0834ac-2ef5-48dc-a86f-511e79c897f7-kube-api-access-4x5qc\") pod \"oauth-openshift-558db77b4-q5r28\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.969661 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.972025 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:10 crc kubenswrapper[4832]: E0125 07:59:10.972342 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.472330348 +0000 UTC m=+134.146153881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.981897 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg799\" (UniqueName: \"kubernetes.io/projected/70fee4de-12e8-4452-a3a7-731815ecbedd-kube-api-access-zg799\") pod \"openshift-apiserver-operator-796bbdcf4f-c8cgr\" (UID: \"70fee4de-12e8-4452-a3a7-731815ecbedd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.987960 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" Jan 25 07:59:10 crc kubenswrapper[4832]: I0125 07:59:10.999172 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8pg27"] Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.000787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4hzj\" (UniqueName: \"kubernetes.io/projected/9d51e019-aeb4-42b0-a900-257aead64221-kube-api-access-k4hzj\") pod \"console-operator-58897d9998-fswfm\" (UID: \"9d51e019-aeb4-42b0-a900-257aead64221\") " pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.002545 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:11 crc kubenswrapper[4832]: W0125 07:59:11.007073 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95dbbcf8_838b_4f56_928a_81b4f038b259.slice/crio-90480f7dba8ae9fdd219e48f2f1853f5e269418bf954755c0e82491f1fd113da WatchSource:0}: Error finding container 90480f7dba8ae9fdd219e48f2f1853f5e269418bf954755c0e82491f1fd113da: Status 404 returned error can't find the container with id 90480f7dba8ae9fdd219e48f2f1853f5e269418bf954755c0e82491f1fd113da Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.012556 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.020213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c592226b-85c1-48b3-9e85-cbd606c1f94d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dswxl\" (UID: \"c592226b-85c1-48b3-9e85-cbd606c1f94d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.026006 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.038665 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhr2\" (UniqueName: \"kubernetes.io/projected/9626a1b0-481b-4cd5-a439-c45a98f1c391-kube-api-access-shhr2\") pod \"machine-approver-56656f9798-9jlxs\" (UID: \"9626a1b0-481b-4cd5-a439-c45a98f1c391\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.060620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b69z\" (UniqueName: \"kubernetes.io/projected/cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1-kube-api-access-2b69z\") pod \"service-ca-9c57cc56f-kpg7m\" (UID: \"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1\") " pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.072971 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:11 crc kubenswrapper[4832]: 
E0125 07:59:11.073345 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.573330406 +0000 UTC m=+134.247153929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.079563 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-295xp\" (UniqueName: \"kubernetes.io/projected/023a5b50-72c3-42a2-8104-dc50489cf857-kube-api-access-295xp\") pod \"etcd-operator-b45778765-f222l\" (UID: \"023a5b50-72c3-42a2-8104-dc50489cf857\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.099694 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.100160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtpx\" (UniqueName: \"kubernetes.io/projected/648bd733-1181-4dcf-8b9c-40806f713ca6-kube-api-access-4rtpx\") pod \"service-ca-operator-777779d784-cdncb\" (UID: \"648bd733-1181-4dcf-8b9c-40806f713ca6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.120268 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.136261 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.136735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5q7m\" (UniqueName: \"kubernetes.io/projected/6eb8ff11-3ea3-4569-9d87-e89416c04784-kube-api-access-q5q7m\") pod \"authentication-operator-69f744f599-6llzt\" (UID: \"6eb8ff11-3ea3-4569-9d87-e89416c04784\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.143614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfl69\" (UniqueName: \"kubernetes.io/projected/24acc510-4a43-4275-9a46-fe2e8258b3c7-kube-api-access-sfl69\") pod \"dns-default-88fz6\" (UID: \"24acc510-4a43-4275-9a46-fe2e8258b3c7\") " pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.157027 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p"] Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.162177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvf6p\" (UniqueName: \"kubernetes.io/projected/cdc4f06b-3e9a-4855-8400-faabc37cd870-kube-api-access-xvf6p\") pod \"router-default-5444994796-xjkrg\" (UID: \"cdc4f06b-3e9a-4855-8400-faabc37cd870\") " pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.169081 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.173973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.174347 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.674335495 +0000 UTC m=+134.348159028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.178342 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.187342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbljd\" (UniqueName: \"kubernetes.io/projected/5be2bfa8-9baa-44a1-92d1-473ff9c0478d-kube-api-access-fbljd\") pod \"openshift-controller-manager-operator-756b6f6bc6-drfl8\" (UID: \"5be2bfa8-9baa-44a1-92d1-473ff9c0478d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.193323 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.201412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2cvd\" (UniqueName: \"kubernetes.io/projected/c670a610-3a09-4fc1-acb2-f768bc4e5bab-kube-api-access-t2cvd\") pod \"ingress-canary-5bk7m\" (UID: \"c670a610-3a09-4fc1-acb2-f768bc4e5bab\") " pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.207305 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.207301 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.223037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e0912c6-9dfc-437a-92f0-c6ee3063c848-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cbsh6\" (UID: \"4e0912c6-9dfc-437a-92f0-c6ee3063c848\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.240899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44txw\" (UniqueName: \"kubernetes.io/projected/92293986-2979-44e0-8331-72f2546d576e-kube-api-access-44txw\") pod \"migrator-59844c95c7-c8c6f\" (UID: \"92293986-2979-44e0-8331-72f2546d576e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.248119 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.256966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk6k2\" (UniqueName: \"kubernetes.io/projected/567da687-f308-4473-a3d0-aad511ca6e8b-kube-api-access-tk6k2\") pod \"packageserver-d55dfcdfc-vhn96\" (UID: \"567da687-f308-4473-a3d0-aad511ca6e8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.275468 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.275956 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.775940694 +0000 UTC m=+134.449764217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.285056 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpnh2\" (UniqueName: \"kubernetes.io/projected/fca662f7-e916-4728-8b6a-0b34ace7117f-kube-api-access-tpnh2\") pod \"kube-storage-version-migrator-operator-b67b599dd-9ll2t\" (UID: \"fca662f7-e916-4728-8b6a-0b34ace7117f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.294353 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.322827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcmqb\" (UniqueName: \"kubernetes.io/projected/bd278886-fb8d-4013-ae54-83edde53bdaa-kube-api-access-hcmqb\") pod \"machine-config-operator-74547568cd-mggjn\" (UID: \"bd278886-fb8d-4013-ae54-83edde53bdaa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.326144 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.332618 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.333040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65h5\" (UniqueName: \"kubernetes.io/projected/f25ba7b4-ecd6-4e84-a97a-13c8fa94f522-kube-api-access-v65h5\") pod \"catalog-operator-68c6474976-6gswk\" (UID: \"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.339600 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.340300 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdjp\" (UniqueName: \"kubernetes.io/projected/1228f33e-a6bd-4c51-ad90-f005c2848d83-kube-api-access-5wdjp\") pod \"package-server-manager-789f6589d5-tqtnp\" (UID: \"1228f33e-a6bd-4c51-ad90-f005c2848d83\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.351976 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.362612 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.369112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dkwz\" (UniqueName: \"kubernetes.io/projected/a32ac557-809a-4a0d-8c18-3c8c5730e849-kube-api-access-8dkwz\") pod \"control-plane-machine-set-operator-78cbb6b69f-fns8l\" (UID: \"a32ac557-809a-4a0d-8c18-3c8c5730e849\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.378159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.378483 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.878471055 +0000 UTC m=+134.552294588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.378823 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.388146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfn4g\" (UniqueName: \"kubernetes.io/projected/5c72bea6-adc6-4db0-aec2-3436d21d9871-kube-api-access-pfn4g\") pod \"machine-config-controller-84d6567774-knhz8\" (UID: \"5c72bea6-adc6-4db0-aec2-3436d21d9871\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.392836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.403545 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.406740 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gp55m"] Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.410956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrr9d\" (UniqueName: \"kubernetes.io/projected/4b4ff59a-58d8-4822-8be8-d48a5a85b2d2-kube-api-access-zrr9d\") pod \"csi-hostpathplugin-jjs2r\" (UID: \"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2\") " pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.414620 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.442320 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl"] Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.444566 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.447657 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdtr\" (UniqueName: \"kubernetes.io/projected/c05896f4-ee7d-4b10-949e-b8bf0d822313-kube-api-access-zqdtr\") pod \"downloads-7954f5f757-jvld2\" (UID: \"c05896f4-ee7d-4b10-949e-b8bf0d822313\") " pod="openshift-console/downloads-7954f5f757-jvld2" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.452736 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.455078 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfmj\" (UniqueName: \"kubernetes.io/projected/b945d594-8566-495a-a66a-92fcd625f021-kube-api-access-5wfmj\") pod \"machine-config-server-752ng\" (UID: \"b945d594-8566-495a-a66a-92fcd625f021\") " pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.464733 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-752ng" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.478961 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5bk7m" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.479285 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw"] Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.479332 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4l94\" (UniqueName: \"kubernetes.io/projected/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-kube-api-access-l4l94\") pod \"collect-profiles-29488785-dcf79\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.480146 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.480876 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:11.98085913 +0000 UTC m=+134.654682663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.508170 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" Jan 25 07:59:11 crc kubenswrapper[4832]: W0125 07:59:11.518813 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdc4f06b_3e9a_4855_8400_faabc37cd870.slice/crio-fd324648b504bca7552cff917c0d791f97023638ec2058224271af5c3f397555 WatchSource:0}: Error finding container fd324648b504bca7552cff917c0d791f97023638ec2058224271af5c3f397555: Status 404 returned error can't find the container with id fd324648b504bca7552cff917c0d791f97023638ec2058224271af5c3f397555 Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.573320 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8pg27" event={"ID":"95dbbcf8-838b-4f56-928a-81b4f038b259","Type":"ContainerStarted","Data":"33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.573656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8pg27" event={"ID":"95dbbcf8-838b-4f56-928a-81b4f038b259","Type":"ContainerStarted","Data":"90480f7dba8ae9fdd219e48f2f1853f5e269418bf954755c0e82491f1fd113da"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.575336 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" event={"ID":"d48c21e4-2d38-4055-a586-93b65a3ff446","Type":"ContainerStarted","Data":"3b3906fa1933965b9dd080c8e34505e07cacd83a688cc9c54bb9c6c6444c0e7a"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.575368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" event={"ID":"d48c21e4-2d38-4055-a586-93b65a3ff446","Type":"ContainerStarted","Data":"d1e59c05f9e1ed520523cd14a8ffb43bad4b6cca10f0ccd478670d9e15309c27"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.577666 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.580879 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" event={"ID":"cc912b0f-bde8-4185-be84-2a2c3394024f","Type":"ContainerStarted","Data":"c28352937f47d8338c08b0f31be0ac73adf0e37bbb1eafe4cb2803aa14c45544"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.580932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" event={"ID":"cc912b0f-bde8-4185-be84-2a2c3394024f","Type":"ContainerStarted","Data":"4825f4d794d5557aba76d8be3afb23d87154395d4f3f7e546f01595c8dafebfe"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.581421 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.582376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.582712 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.082698817 +0000 UTC m=+134.756522350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.583090 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" event={"ID":"462c88d9-0b9e-4b53-9b5d-78e14179c952","Type":"ContainerStarted","Data":"595c1a9979cb9b6c8d2b841f95a2b3818607fd26839dbfda90221c3ceb4f6d46"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.586034 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 
07:59:11.586058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz" event={"ID":"0e4fd4e7-2916-47d8-8d38-012c53e792fc","Type":"ContainerStarted","Data":"8b5555a7f59037d5e95a65aec8f0cb60e502fcbdefe489bcc8f7ac1545462932"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.586075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz" event={"ID":"0e4fd4e7-2916-47d8-8d38-012c53e792fc","Type":"ContainerStarted","Data":"864be867e92b602edabb44dd02a9c8a834613968f369970067213980b6a4085c"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.600890 4832 generic.go:334] "Generic (PLEG): container finished" podID="39120fe3-c252-4345-80bc-048cde22bafe" containerID="7f8bf0d316741e0153c1694781bb413a9b38d2225e24048020811a04d52b42d6" exitCode=0 Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.600975 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" event={"ID":"39120fe3-c252-4345-80bc-048cde22bafe","Type":"ContainerDied","Data":"7f8bf0d316741e0153c1694781bb413a9b38d2225e24048020811a04d52b42d6"} Jan 25 07:59:11 crc kubenswrapper[4832]: W0125 07:59:11.606949 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1211d5b_db27_4814_85b9_241c30afaaab.slice/crio-9226840dc2be99783776e1be1dd91b99d5ab2747ef6f173af45a52526aa90edc WatchSource:0}: Error finding container 9226840dc2be99783776e1be1dd91b99d5ab2747ef6f173af45a52526aa90edc: Status 404 returned error can't find the container with id 9226840dc2be99783776e1be1dd91b99d5ab2747ef6f173af45a52526aa90edc Jan 25 07:59:11 crc kubenswrapper[4832]: W0125 07:59:11.614050 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6da273c_cb4f_48a9_88cf_70ae8647e580.slice/crio-05b95d59b70fd187f8d7acd36fe139b56c6a02a5454d9298f491b9e7f267af37 WatchSource:0}: Error finding container 05b95d59b70fd187f8d7acd36fe139b56c6a02a5454d9298f491b9e7f267af37: Status 404 returned error can't find the container with id 05b95d59b70fd187f8d7acd36fe139b56c6a02a5454d9298f491b9e7f267af37 Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.616754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" event={"ID":"c97f51ea-b215-4660-bc7b-2406783aa3bb","Type":"ContainerStarted","Data":"c7664c7ac9b4377cc9c7b624c5daefd6b6623febb560cc7ea9d15dcfc36d59e8"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.618650 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.627252 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gqjzs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.627299 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" podUID="c97f51ea-b215-4660-bc7b-2406783aa3bb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.627621 4832 generic.go:334] "Generic (PLEG): container finished" podID="d506c861-ab5e-4341-8e16-ce9166f24d5c" containerID="ccbe128bd135686f2ab15be80dfb410b8c60cb7c08266df9c24bb9cb6f91b860" exitCode=0 Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 
07:59:11.627697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-99kns" event={"ID":"d506c861-ab5e-4341-8e16-ce9166f24d5c","Type":"ContainerDied","Data":"ccbe128bd135686f2ab15be80dfb410b8c60cb7c08266df9c24bb9cb6f91b860"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.627749 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-99kns" event={"ID":"d506c861-ab5e-4341-8e16-ce9166f24d5c","Type":"ContainerStarted","Data":"8668c5a6fc6f0a636013057a8fab1be32f58a8e7ef9383963c4db20562e0ea8d"} Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.654758 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.660592 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-29fbk"] Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.686182 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.687208 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.187194103 +0000 UTC m=+134.861017636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.694709 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.734416 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq"] Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.737375 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvld2" Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.787711 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.788143 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.28812749 +0000 UTC m=+134.961951023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.888923 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.890507 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.390488045 +0000 UTC m=+135.064311588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:11 crc kubenswrapper[4832]: I0125 07:59:11.991331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:11 crc kubenswrapper[4832]: E0125 07:59:11.991692 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.49167848 +0000 UTC m=+135.165502013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.092691 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.094160 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.594105817 +0000 UTC m=+135.267929370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.146213 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cdncb"] Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.172426 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-88fz6"] Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.174324 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sqbmg"] Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.194246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.194605 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.694594448 +0000 UTC m=+135.368417981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.299830 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nlxgx" podStartSLOduration=116.299810129 podStartE2EDuration="1m56.299810129s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:12.256538739 +0000 UTC m=+134.930362272" watchObservedRunningTime="2026-01-25 07:59:12.299810129 +0000 UTC m=+134.973633662" Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.300285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.300766 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.80074753 +0000 UTC m=+135.474571063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.378543 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8pg27" podStartSLOduration=116.378525285 podStartE2EDuration="1m56.378525285s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:12.357273498 +0000 UTC m=+135.031097031" watchObservedRunningTime="2026-01-25 07:59:12.378525285 +0000 UTC m=+135.052348818" Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.408764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.409070 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:12.909057045 +0000 UTC m=+135.582880578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.422356 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7rwcz" podStartSLOduration=116.422338254 podStartE2EDuration="1m56.422338254s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:12.420222722 +0000 UTC m=+135.094046255" watchObservedRunningTime="2026-01-25 07:59:12.422338254 +0000 UTC m=+135.096161787" Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.509567 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.509698 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.009677632 +0000 UTC m=+135.683501165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.510186 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.510600 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.010585232 +0000 UTC m=+135.684408765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.611445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.611807 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.111782898 +0000 UTC m=+135.785606431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.645565 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" event={"ID":"58b235e2-ab37-4d26-ba86-c188dae1bcda","Type":"ContainerStarted","Data":"890b51954b2c8f12dd89d2f72ebf56e42216b7219d9d6da8e0628db99a630f24"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.659421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" event={"ID":"648bd733-1181-4dcf-8b9c-40806f713ca6","Type":"ContainerStarted","Data":"402d9c85e7ccbcec839b6576f2c5ec122a93e18997121dfcbf356c918ba3a85d"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.665753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" event={"ID":"f6da273c-cb4f-48a9-88cf-70ae8647e580","Type":"ContainerStarted","Data":"05b95d59b70fd187f8d7acd36fe139b56c6a02a5454d9298f491b9e7f267af37"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.676077 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" event={"ID":"462c88d9-0b9e-4b53-9b5d-78e14179c952","Type":"ContainerStarted","Data":"3caa1c3e0800105c1aeb5345b23e7cab09d4fff50dc90e8d6daab643ceaae744"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.713111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.713486 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.2134733 +0000 UTC m=+135.887296843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.732036 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" event={"ID":"6afbd903-07e1-4806-9a41-a073a6a4acb7","Type":"ContainerStarted","Data":"bcced3a96d71c9797f143212c52e1ecc9b12f16f68441b863d27c5333a03ea3e"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.732086 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" event={"ID":"6afbd903-07e1-4806-9a41-a073a6a4acb7","Type":"ContainerStarted","Data":"f93e627bb0abc450ac49df31c554634a3ccf8d78562358b0725c54d44a0fce9c"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.735679 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-88fz6" 
event={"ID":"24acc510-4a43-4275-9a46-fe2e8258b3c7","Type":"ContainerStarted","Data":"641e3e985510232199ae7bb5744024ce815cc7c5615df2c24073bebb5137289f"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.786461 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" event={"ID":"cc912b0f-bde8-4185-be84-2a2c3394024f","Type":"ContainerStarted","Data":"26064e96a2f9de9bf8cbde3a709cdc62520c56974e52aeb55a46a7a33c130026"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.788029 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl"] Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.805559 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xjkrg" event={"ID":"cdc4f06b-3e9a-4855-8400-faabc37cd870","Type":"ContainerStarted","Data":"2c343ac25ec33a703b8181e597da35953198250b285539fc844cddf8ab11e40c"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.805604 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xjkrg" event={"ID":"cdc4f06b-3e9a-4855-8400-faabc37cd870","Type":"ContainerStarted","Data":"fd324648b504bca7552cff917c0d791f97023638ec2058224271af5c3f397555"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.817352 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.817590 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.317569753 +0000 UTC m=+135.991393286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.818040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.818429 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.318412611 +0000 UTC m=+135.992236154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.832130 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" event={"ID":"39120fe3-c252-4345-80bc-048cde22bafe","Type":"ContainerStarted","Data":"bcd1acaaf623e8b2db807dcc9bdb5ffcc0904057b968692867cf097a53e11ced"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.832210 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.904207 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" event={"ID":"b1211d5b-db27-4814-85b9-241c30afaaab","Type":"ContainerStarted","Data":"371bcfcf13bb404f3493b08b08afb9f0f66dca883eadbf3b3e20c558f8d1cc43"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.904290 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" event={"ID":"b1211d5b-db27-4814-85b9-241c30afaaab","Type":"ContainerStarted","Data":"9226840dc2be99783776e1be1dd91b99d5ab2747ef6f173af45a52526aa90edc"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.926040 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.926283 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.426266882 +0000 UTC m=+136.100090425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.926765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:12 crc kubenswrapper[4832]: E0125 07:59:12.927158 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.427147201 +0000 UTC m=+136.100970734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.929481 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-752ng" event={"ID":"b945d594-8566-495a-a66a-92fcd625f021","Type":"ContainerStarted","Data":"1a19356e3776a533ff8e3346af12cd432d4c72d2ff9bcd2ec44c4647bff938c3"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.929550 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-752ng" event={"ID":"b945d594-8566-495a-a66a-92fcd625f021","Type":"ContainerStarted","Data":"2f4efd1281197b7242391ece4051008a2936267a1acb357104b572e3e1ea9d4e"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.935754 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kpg7m"] Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.943873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" event={"ID":"9626a1b0-481b-4cd5-a439-c45a98f1c391","Type":"ContainerStarted","Data":"b9d1978c3b5ebd1697e6b8b22cd606c0e8c95d26e9cdf68da41eb6b9c496c594"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.948155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" 
event={"ID":"7fad5166-9aa0-4c10-8c73-2186af1d226d","Type":"ContainerStarted","Data":"42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.948884 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.948991 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" event={"ID":"7fad5166-9aa0-4c10-8c73-2186af1d226d","Type":"ContainerStarted","Data":"9b7781d0df06fa0acac3945c3db98d23cdcb581ebfc8e1b6c83e46dc05d5432e"} Jan 25 07:59:12 crc kubenswrapper[4832]: I0125 07:59:12.961831 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.028444 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.028674 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.528647476 +0000 UTC m=+136.202471009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.029246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.032254 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.532241428 +0000 UTC m=+136.206064961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.134328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.134422 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.634406026 +0000 UTC m=+136.308229549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.134889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.135271 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.635262955 +0000 UTC m=+136.309086488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.234752 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" podStartSLOduration=117.234735782 podStartE2EDuration="1m57.234735782s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.214709486 +0000 UTC m=+135.888533029" watchObservedRunningTime="2026-01-25 07:59:13.234735782 +0000 UTC m=+135.908559315" Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.235736 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.235928 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.735901361 +0000 UTC m=+136.409724904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.236115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9"
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.236601 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.736591425 +0000 UTC m=+136.410414968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.237971 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.240149 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.242283 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fswfm"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.260475 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q5r28"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.270786 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.298875 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-752ng" podStartSLOduration=5.2988590460000005 podStartE2EDuration="5.298859046s" podCreationTimestamp="2026-01-25 07:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.296855888 +0000 UTC m=+135.970679421" watchObservedRunningTime="2026-01-25 07:59:13.298859046 +0000 UTC m=+135.972682579"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.303014 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.303893 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.307482 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.309730 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.321683 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.324581 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.330560 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f222l"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.330602 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6llzt"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.331218 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.336935 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.337132 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.837036345 +0000 UTC m=+136.510859868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.337218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9"
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.337494 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.83748672 +0000 UTC m=+136.511310243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.363628 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xjkrg"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.373805 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7n7p" podStartSLOduration=117.373768484 podStartE2EDuration="1m57.373768484s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.336405383 +0000 UTC m=+136.010228916" watchObservedRunningTime="2026-01-25 07:59:13.373768484 +0000 UTC m=+136.047592007"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.413107 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" podStartSLOduration=117.413090231 podStartE2EDuration="1m57.413090231s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.374010833 +0000 UTC m=+136.047834366" watchObservedRunningTime="2026-01-25 07:59:13.413090231 +0000 UTC m=+136.086913764"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.437712 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.437881 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.937854397 +0000 UTC m=+136.611677930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.438072 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9"
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.438352 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:13.938340234 +0000 UTC m=+136.612163767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.439514 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.441829 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jjs2r"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.497047 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-fth6d" podStartSLOduration=117.497026174 podStartE2EDuration="1m57.497026174s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.456107033 +0000 UTC m=+136.129930576" watchObservedRunningTime="2026-01-25 07:59:13.497026174 +0000 UTC m=+136.170849707"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.499820 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" podStartSLOduration=117.499812128 podStartE2EDuration="1m57.499812128s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.495264794 +0000 UTC m=+136.169088327" watchObservedRunningTime="2026-01-25 07:59:13.499812128 +0000 UTC m=+136.173635661"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.520266 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 25 07:59:13 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld
Jan 25 07:59:13 crc kubenswrapper[4832]: [+]process-running ok
Jan 25 07:59:13 crc kubenswrapper[4832]: healthz check failed
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.520343 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.540150 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.541037 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.041020059 +0000 UTC m=+136.714843592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.559961 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5bk7m"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.567717 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.570440 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvld2"]
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.582378 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xjkrg" podStartSLOduration=117.582356784 podStartE2EDuration="1m57.582356784s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.554760292 +0000 UTC m=+136.228583835" watchObservedRunningTime="2026-01-25 07:59:13.582356784 +0000 UTC m=+136.256180317"
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.596037 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79"]
Jan 25 07:59:13 crc kubenswrapper[4832]: W0125 07:59:13.613685 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcba7e1f8_bc7f_4c85_bdc5_4a81bb6622d1.slice/crio-9bc16132e0496628ef11095d72573f8a6f08969e7fd049ce8391f6549f072ac6 WatchSource:0}: Error finding container 9bc16132e0496628ef11095d72573f8a6f08969e7fd049ce8391f6549f072ac6: Status 404 returned error can't find the container with id 9bc16132e0496628ef11095d72573f8a6f08969e7fd049ce8391f6549f072ac6
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.628453 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw"
Jan 25 07:59:13 crc kubenswrapper[4832]: W0125 07:59:13.633000 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c72bea6_adc6_4db0_aec2_3436d21d9871.slice/crio-aa61997a7d40a03ea0d0d9bf7394576dd804e0a7a9478316bf239232d663ac2d WatchSource:0}: Error finding container aa61997a7d40a03ea0d0d9bf7394576dd804e0a7a9478316bf239232d663ac2d: Status 404 returned error can't find the container with id aa61997a7d40a03ea0d0d9bf7394576dd804e0a7a9478316bf239232d663ac2d
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.643791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9"
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.645240 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.144675257 +0000 UTC m=+136.818498790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.745564 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.746211 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.246191993 +0000 UTC m=+136.920015526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.848077 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9"
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.848361 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.348350271 +0000 UTC m=+137.022173804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.948729 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 07:59:13 crc kubenswrapper[4832]: E0125 07:59:13.949742 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.449726952 +0000 UTC m=+137.123550475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.967626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" event={"ID":"648bd733-1181-4dcf-8b9c-40806f713ca6","Type":"ContainerStarted","Data":"90b8ecc694a6c433687e73acb61124cf03674891211a161b685df2103c6083e4"}
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.973122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" event={"ID":"bd278886-fb8d-4013-ae54-83edde53bdaa","Type":"ContainerStarted","Data":"174ea3ed620bacbdc47bdcb86abf477636686fafbefd93a4b815426fee7bffe3"}
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.975258 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" event={"ID":"023a5b50-72c3-42a2-8104-dc50489cf857","Type":"ContainerStarted","Data":"60c57c8edb0e8fb22f20684ca20da8fa1b30ce303d207f2ffd7d077ecfee25e4"}
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.977676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" event={"ID":"4e0912c6-9dfc-437a-92f0-c6ee3063c848","Type":"ContainerStarted","Data":"14256836a5f111f004b81870e2a3c8cc74416908c2906e5236826e0068c3d903"}
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.979465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" event={"ID":"6eb8ff11-3ea3-4569-9d87-e89416c04784","Type":"ContainerStarted","Data":"ffbf2ac92dd3d1d5a5f99edd0103ec5d5ccc20c95ede4233ab84b1c1591a1479"}
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.987522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" event={"ID":"92293986-2979-44e0-8331-72f2546d576e","Type":"ContainerStarted","Data":"d68173de46b359607fb5d4a953ac7e59d71f7a46cf1ed4adbccceb9410cce5ca"}
Jan 25 07:59:13 crc kubenswrapper[4832]: I0125 07:59:13.990601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" event={"ID":"c592226b-85c1-48b3-9e85-cbd606c1f94d","Type":"ContainerStarted","Data":"891f3a6345ad129d4196f710371241ba62808e9cab52f8f85a51aef8bad088e8"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.006622 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" event={"ID":"6afbd903-07e1-4806-9a41-a073a6a4acb7","Type":"ContainerStarted","Data":"0d7a862eccb3aae5eab5ac5c7df67fba548338980df0450c1f736a852bd4898d"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.017522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" event={"ID":"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2","Type":"ContainerStarted","Data":"58fca5ca58444ce7341b436c52190962acebfc34412f658d902314a9c12181ce"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.020070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" event={"ID":"cb0834ac-2ef5-48dc-a86f-511e79c897f7","Type":"ContainerStarted","Data":"2487fbdce256f30617517b45f0729e348432293f6deade7aceb3e47928c6adcb"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.037760 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-29fbk" podStartSLOduration=118.037730113 podStartE2EDuration="1m58.037730113s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:14.03764293 +0000 UTC m=+136.711466463" watchObservedRunningTime="2026-01-25 07:59:14.037730113 +0000 UTC m=+136.711553646"
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.039272 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cdncb" podStartSLOduration=118.039263374 podStartE2EDuration="1m58.039263374s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:13.997785454 +0000 UTC m=+136.671609017" watchObservedRunningTime="2026-01-25 07:59:14.039263374 +0000 UTC m=+136.713086907"
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.051996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9"
Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.052270 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.552259123 +0000 UTC m=+137.226082656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.059316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" event={"ID":"567da687-f308-4473-a3d0-aad511ca6e8b","Type":"ContainerStarted","Data":"25f7588852dea721f7532009c5a9fa52d156ea3f832a39e6146962a7e9d09fe1"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.071303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" event={"ID":"051ceaa0-fdb3-480a-9c5d-f56b1194ca81","Type":"ContainerStarted","Data":"d0a22e098791e15839c35b35e96e335c398e955897d9a70799c3ad2fb614120c"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.140949 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" event={"ID":"b1211d5b-db27-4814-85b9-241c30afaaab","Type":"ContainerStarted","Data":"8fed056e454c26d78973cb3d82140e46cb817fd8ccbeeeb4c2091540006c54df"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.144499 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" event={"ID":"5c72bea6-adc6-4db0-aec2-3436d21d9871","Type":"ContainerStarted","Data":"aa61997a7d40a03ea0d0d9bf7394576dd804e0a7a9478316bf239232d663ac2d"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.147718 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5bk7m" event={"ID":"c670a610-3a09-4fc1-acb2-f768bc4e5bab","Type":"ContainerStarted","Data":"0a27f48099a5a4793f3c0da3e1022c77063dafb261c16c2e7e3ec77d51c385c4"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.149094 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" event={"ID":"58b235e2-ab37-4d26-ba86-c188dae1bcda","Type":"ContainerStarted","Data":"7cfb5d799d242c911975b5680c4943ab6db90fdf0d99ae45adc117799696b9ca"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.153874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.154517 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.654500503 +0000 UTC m=+137.328324036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.164752 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" event={"ID":"a32ac557-809a-4a0d-8c18-3c8c5730e849","Type":"ContainerStarted","Data":"7fe7dabcce56d8e5c7e4c1f0609f9fc890a5c833f59af58723e9d5cdeb5bc7b4"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.179002 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" event={"ID":"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1","Type":"ContainerStarted","Data":"9bc16132e0496628ef11095d72573f8a6f08969e7fd049ce8391f6549f072ac6"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.189928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvld2" event={"ID":"c05896f4-ee7d-4b10-949e-b8bf0d822313","Type":"ContainerStarted","Data":"4519000201fd85ed60a7a6202334ae10a14eafef8daa79aef6226ee84c6e8c13"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.211637 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" event={"ID":"70fee4de-12e8-4452-a3a7-731815ecbedd","Type":"ContainerStarted","Data":"f2cf5c7e8207344725ae9114d50e068098cf1713ba9d88385b902028dfb1627f"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.247587 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-88fz6" event={"ID":"24acc510-4a43-4275-9a46-fe2e8258b3c7","Type":"ContainerStarted","Data":"941e4b8f0716523a71a24586fd82fec7e1265d5ccb9885e386d31973a0db9521"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.255018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9"
Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.257250 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.75723397 +0000 UTC m=+137.431057503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.364655 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fswfm" event={"ID":"9d51e019-aeb4-42b0-a900-257aead64221","Type":"ContainerStarted","Data":"8df2cdbf621375cba93a858a4c57cdb03769544e62c30e55b9881b1a8ceba9b6"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.364722 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fswfm" event={"ID":"9d51e019-aeb4-42b0-a900-257aead64221","Type":"ContainerStarted","Data":"4ddf422face144be5af714d3181a0a3f992544ac1024c0ec8b3e2fe81556e6d2"}
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.366242 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fswfm"
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.371072 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.371429 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:14.871407954 +0000 UTC m=+137.545231487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.381267 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 25 07:59:14 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld
Jan 25 07:59:14 crc kubenswrapper[4832]: [+]process-running ok
Jan 25 07:59:14 crc kubenswrapper[4832]: healthz check failed
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.381751 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.383100 4832 patch_prober.go:28] interesting pod/console-operator-58897d9998-fswfm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.383175 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fswfm" podUID="9d51e019-aeb4-42b0-a900-257aead64221" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443:
connect: connection refused" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.396570 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fswfm" podStartSLOduration=118.396550363 podStartE2EDuration="1m58.396550363s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:14.395903801 +0000 UTC m=+137.069727334" watchObservedRunningTime="2026-01-25 07:59:14.396550363 +0000 UTC m=+137.070373896" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.396669 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gp55m" podStartSLOduration=118.396664657 podStartE2EDuration="1m58.396664657s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:14.191855285 +0000 UTC m=+136.865678818" watchObservedRunningTime="2026-01-25 07:59:14.396664657 +0000 UTC m=+137.070488190" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.398375 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" event={"ID":"9626a1b0-481b-4cd5-a439-c45a98f1c391","Type":"ContainerStarted","Data":"a728f532b4d3bb579d1180c1f83b973927f814599437f21b407c642a98ce53a3"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.426289 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" event={"ID":"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522","Type":"ContainerStarted","Data":"3da70fb0a6c39f32c23ed9c44361fa9e6b1a4a7a0260b39fe2d5a99660886b17"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.426735 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.439265 4832 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6gswk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.439331 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" podUID="f25ba7b4-ecd6-4e84-a97a-13c8fa94f522" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.450358 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" event={"ID":"1228f33e-a6bd-4c51-ad90-f005c2848d83","Type":"ContainerStarted","Data":"2079ad841756f61237812ce524468b81087a71ca882c40ad313aa81db83e1b7a"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.473966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.475566 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-25 07:59:14.975555159 +0000 UTC m=+137.649378692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.488768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" event={"ID":"fca662f7-e916-4728-8b6a-0b34ace7117f","Type":"ContainerStarted","Data":"76fb630ce7b0767eb60a1fedba4ded7bff868eb38409f94af1be089786b1c979"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.488817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" event={"ID":"fca662f7-e916-4728-8b6a-0b34ace7117f","Type":"ContainerStarted","Data":"d91aee2d4742a8c924b5647b437fa84298aac91bab6b00e2a13d11b7d2c9a259"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.504398 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" podStartSLOduration=118.504363281 podStartE2EDuration="1m58.504363281s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:14.50077261 +0000 UTC m=+137.174596143" watchObservedRunningTime="2026-01-25 07:59:14.504363281 +0000 UTC m=+137.178186814" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.558042 
4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9ll2t" podStartSLOduration=118.558019852 podStartE2EDuration="1m58.558019852s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:14.555609681 +0000 UTC m=+137.229433214" watchObservedRunningTime="2026-01-25 07:59:14.558019852 +0000 UTC m=+137.231843385" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.574515 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.574809 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.074785018 +0000 UTC m=+137.748608551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.575073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.575358 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.075351267 +0000 UTC m=+137.749174800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.652030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-99kns" event={"ID":"d506c861-ab5e-4341-8e16-ce9166f24d5c","Type":"ContainerStarted","Data":"610f1cbf8f5cbf207dc1bcf91b264dbd508ced86d31459e77d7d1f7bdf2f8bf3"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.658511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" event={"ID":"5be2bfa8-9baa-44a1-92d1-473ff9c0478d","Type":"ContainerStarted","Data":"a7006e4e26a64717a1bc6978c8086f8a4e72cae43c70e46dad6cde22d7a9fbfc"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.660524 4832 generic.go:334] "Generic (PLEG): container finished" podID="f6da273c-cb4f-48a9-88cf-70ae8647e580" containerID="2af17e2ba0ac5057eb49820bd4d82c33b032063cb0d830edb8ef5fee1ef267f1" exitCode=0 Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.660571 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" event={"ID":"f6da273c-cb4f-48a9-88cf-70ae8647e580","Type":"ContainerDied","Data":"2af17e2ba0ac5057eb49820bd4d82c33b032063cb0d830edb8ef5fee1ef267f1"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.672410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" 
event={"ID":"8be00535-0bc6-41a2-a79c-552be0f574a8","Type":"ContainerStarted","Data":"9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.672464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" event={"ID":"8be00535-0bc6-41a2-a79c-552be0f574a8","Type":"ContainerStarted","Data":"540bf08f9a452ad64ac7c34ee7785738e4574473c85b256f4b4b816be7d14e87"} Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.672479 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.689013 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.690684 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.190667969 +0000 UTC m=+137.864491502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.702796 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-99kns" podStartSLOduration=118.702758797 podStartE2EDuration="1m58.702758797s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:14.701207805 +0000 UTC m=+137.375031338" watchObservedRunningTime="2026-01-25 07:59:14.702758797 +0000 UTC m=+137.376582330" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.706591 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.797527 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" podStartSLOduration=118.797470043 podStartE2EDuration="1m58.797470043s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:14.741897538 +0000 UTC m=+137.415721071" watchObservedRunningTime="2026-01-25 07:59:14.797470043 +0000 UTC m=+137.471293576" Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.814422 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.816001 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.315987699 +0000 UTC m=+137.989811232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.919182 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.919899 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.419870215 +0000 UTC m=+138.093693748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:14 crc kubenswrapper[4832]: I0125 07:59:14.919967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:14 crc kubenswrapper[4832]: E0125 07:59:14.922213 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.422198903 +0000 UTC m=+138.096022436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.024851 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.025606 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.525591322 +0000 UTC m=+138.199414855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.131262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.131831 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.631815697 +0000 UTC m=+138.305639230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.233641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.235055 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.735028031 +0000 UTC m=+138.408851564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.336287 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.336605 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:15.836595179 +0000 UTC m=+138.510418712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.377599 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:15 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 25 07:59:15 crc kubenswrapper[4832]: [+]process-running ok Jan 25 07:59:15 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.377656 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.439701 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.440080 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 07:59:15.94004571 +0000 UTC m=+138.613869243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.465754 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.466192 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.541292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.541894 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.041877766 +0000 UTC m=+138.715701299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.643160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.643529 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.143513477 +0000 UTC m=+138.817337010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.679157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" event={"ID":"023a5b50-72c3-42a2-8104-dc50489cf857","Type":"ContainerStarted","Data":"2076cb929cd0b43012fa062f5067fb910cdb3aef588cf5340881f48235ef767d"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.680747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" event={"ID":"6eb8ff11-3ea3-4569-9d87-e89416c04784","Type":"ContainerStarted","Data":"1c798ec3e7e636dba4dd78b56f3b2d40f0f9ebfe1b5f67db4ad7ba9318ba87e3"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.682510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" event={"ID":"f25ba7b4-ecd6-4e84-a97a-13c8fa94f522","Type":"ContainerStarted","Data":"c065cb191d079945bbd4609721099fd04cf3e72d3148f945a089e6262cf874b3"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.686001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" event={"ID":"bd278886-fb8d-4013-ae54-83edde53bdaa","Type":"ContainerStarted","Data":"88d45b18f16b1a123c729c3fb6f766676ff02ab60de174ed53835e6981649da2"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.686066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" event={"ID":"bd278886-fb8d-4013-ae54-83edde53bdaa","Type":"ContainerStarted","Data":"d380d0bdc980a43ac5d3fa89ded99e6331a52753be9e4443d56fb85105292d5d"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.688265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" event={"ID":"4e0912c6-9dfc-437a-92f0-c6ee3063c848","Type":"ContainerStarted","Data":"43d54085c4b2826825e31eac2d43c9bd40224d93716dbb26556c2db47263277f"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.690450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5bk7m" event={"ID":"c670a610-3a09-4fc1-acb2-f768bc4e5bab","Type":"ContainerStarted","Data":"4a228762147e2d25ceccddf1cbaa5e085064526a48bc991f193e8a788a28125b"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.691100 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6gswk" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.693066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" event={"ID":"468a6836-4216-434c-8c75-16b6d41eb2c4","Type":"ContainerStarted","Data":"67f889852e0233f2de5adb533329db9bb5d9041a6b54f3b2f0d00150236b50ad"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.693112 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" event={"ID":"468a6836-4216-434c-8c75-16b6d41eb2c4","Type":"ContainerStarted","Data":"f9bcdb0890bf01ad7b24b8cbbc7a6b2864ae62bc983bb9e4517529e424d6913c"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.693128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" event={"ID":"468a6836-4216-434c-8c75-16b6d41eb2c4","Type":"ContainerStarted","Data":"5dc575784de2611d9550abb9c194b407cc4357a372cfd3a77c210388feb0726f"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.695178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" event={"ID":"58b235e2-ab37-4d26-ba86-c188dae1bcda","Type":"ContainerStarted","Data":"4fbe16c4c6702fb0b988a71a2a5b4c360e13db1b0ddfbb490c8d712da75c4c7a"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.697953 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" event={"ID":"70fee4de-12e8-4452-a3a7-731815ecbedd","Type":"ContainerStarted","Data":"13b4bd1ae5afe25fc585e93551215bb35506b3eb8846dc412ca8414a33e5cfe5"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.708491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-88fz6" event={"ID":"24acc510-4a43-4275-9a46-fe2e8258b3c7","Type":"ContainerStarted","Data":"8c76c97febfe3e58ccec7c672434847833d81921df3397866c2f71d475c581b5"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.708638 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.714866 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" event={"ID":"f6da273c-cb4f-48a9-88cf-70ae8647e580","Type":"ContainerStarted","Data":"17a5828a47cc0f55ae47262a1af62d48b805fe4db4df793401b79e9667fc7534"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.721701 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" 
event={"ID":"a32ac557-809a-4a0d-8c18-3c8c5730e849","Type":"ContainerStarted","Data":"045d23e948a5231a27b5aee93f32484cb2c9f96e5393776a89b68504728cec4c"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.725189 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" event={"ID":"9626a1b0-481b-4cd5-a439-c45a98f1c391","Type":"ContainerStarted","Data":"82bcc9b7d6342aa0c91d2b53adad241c97595510a2e640d2d318e7d8862ca251"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.728402 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" event={"ID":"cb0834ac-2ef5-48dc-a86f-511e79c897f7","Type":"ContainerStarted","Data":"4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.729133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.731000 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvld2" event={"ID":"c05896f4-ee7d-4b10-949e-b8bf0d822313","Type":"ContainerStarted","Data":"bc520db7bf44979d68afc4b2b44cb39da060f852163c383261e83cf6901e4174"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.731978 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jvld2" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.732431 4832 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-q5r28 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.732486 4832 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" podUID="cb0834ac-2ef5-48dc-a86f-511e79c897f7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.733159 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvld2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.733193 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvld2" podUID="c05896f4-ee7d-4b10-949e-b8bf0d822313" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.734519 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" event={"ID":"c592226b-85c1-48b3-9e85-cbd606c1f94d","Type":"ContainerStarted","Data":"dc4a69f72216c1313ca527848b91a3085ef92b5efc3878a03f6564d91a55268f"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.740884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-99kns" event={"ID":"d506c861-ab5e-4341-8e16-ce9166f24d5c","Type":"ContainerStarted","Data":"4e758b544fff48ce03b84621868839c8f02e8ba24d71acfe532cbabd817086ea"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.743944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: 
\"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.744314 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.244297008 +0000 UTC m=+138.918120541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.744521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" event={"ID":"051ceaa0-fdb3-480a-9c5d-f56b1194ca81","Type":"ContainerStarted","Data":"6387974f472abd37b386de1337e463ca8517d1c91ef706a01e56a7509c79ae88"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.750745 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" event={"ID":"5be2bfa8-9baa-44a1-92d1-473ff9c0478d","Type":"ContainerStarted","Data":"ed52a97718171957caed04f2d8aa2a0fc002ca821efdab72e5283419371f3b8f"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.759465 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-f222l" podStartSLOduration=119.75944589 podStartE2EDuration="1m59.75944589s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:15.719546663 +0000 UTC m=+138.393370196" watchObservedRunningTime="2026-01-25 07:59:15.75944589 +0000 UTC m=+138.433269413" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.760068 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mggjn" podStartSLOduration=119.760065151 podStartE2EDuration="1m59.760065151s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:15.759342006 +0000 UTC m=+138.433165539" watchObservedRunningTime="2026-01-25 07:59:15.760065151 +0000 UTC m=+138.433888674" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.760790 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" event={"ID":"5c72bea6-adc6-4db0-aec2-3436d21d9871","Type":"ContainerStarted","Data":"c678e5177fe5e681829b343851c6b0f32c6b9c417a739c1f5c62ae52175c1190"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.760865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" event={"ID":"5c72bea6-adc6-4db0-aec2-3436d21d9871","Type":"ContainerStarted","Data":"0480668bbedf5eabba26dd9824895bb5230c82da41bb6fbd7d76b254dec44fea"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.800122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" event={"ID":"92293986-2979-44e0-8331-72f2546d576e","Type":"ContainerStarted","Data":"d37b4dea59a36d90e316bfc11e8213e9f092ce94ba27fa5d9ed03227c4e214ad"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.800189 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" event={"ID":"92293986-2979-44e0-8331-72f2546d576e","Type":"ContainerStarted","Data":"c72e27ce13a3ca0f02a361bbd7a94e2531a51f75c1d49e91b3884b556ec57e9b"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.800350 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cbsh6" podStartSLOduration=119.800331209 podStartE2EDuration="1m59.800331209s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:15.800153783 +0000 UTC m=+138.473977316" watchObservedRunningTime="2026-01-25 07:59:15.800331209 +0000 UTC m=+138.474154742" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.841720 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" event={"ID":"cba7e1f8-bc7f-4c85-bdc5-4a81bb6622d1","Type":"ContainerStarted","Data":"cdae4dd6bbd935a7922b47b0367669c73426776d91587110b6b9552fe74f4877"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.847932 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.848047 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.348031119 +0000 UTC m=+139.021854652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.851692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.869319 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.369300466 +0000 UTC m=+139.043123999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.902895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" event={"ID":"1228f33e-a6bd-4c51-ad90-f005c2848d83","Type":"ContainerStarted","Data":"cfe51e54035ef263174ff8c4a4d5293cc510522dea021e08626d0269a72eb00f"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.903396 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" event={"ID":"1228f33e-a6bd-4c51-ad90-f005c2848d83","Type":"ContainerStarted","Data":"9d9c68a6b1c3c79d89a99fc778fb54ebf0d4ab2943b158605e58b480714d6c5b"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.904087 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.925210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" event={"ID":"567da687-f308-4473-a3d0-aad511ca6e8b","Type":"ContainerStarted","Data":"1539bfa2ad7b0112df2e164a9db05d64a98785e78820f61d2fc49b4d69ddcd57"} Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.925247 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.953021 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:15 crc kubenswrapper[4832]: E0125 07:59:15.954451 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.45443057 +0000 UTC m=+139.128254113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.968912 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" podStartSLOduration=119.968897158 podStartE2EDuration="1m59.968897158s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:15.927888515 +0000 UTC m=+138.601712048" watchObservedRunningTime="2026-01-25 07:59:15.968897158 +0000 UTC m=+138.642720691" Jan 25 07:59:15 crc kubenswrapper[4832]: I0125 07:59:15.995083 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" Jan 25 07:59:16 crc 
kubenswrapper[4832]: I0125 07:59:16.008215 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.008275 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.040775 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6llzt" podStartSLOduration=120.040756223 podStartE2EDuration="2m0.040756223s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:15.970195572 +0000 UTC m=+138.644019115" watchObservedRunningTime="2026-01-25 07:59:16.040756223 +0000 UTC m=+138.714579756" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.046483 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-88fz6" podStartSLOduration=8.046473126 podStartE2EDuration="8.046473126s" podCreationTimestamp="2026-01-25 07:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.017255591 +0000 UTC m=+138.691079124" watchObservedRunningTime="2026-01-25 07:59:16.046473126 +0000 UTC m=+138.720296659" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.056921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 
07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.059368 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.559353252 +0000 UTC m=+139.233176785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.158930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.159351 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.659331885 +0000 UTC m=+139.333155418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.208225 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zxhsq" podStartSLOduration=120.208206574 podStartE2EDuration="2m0.208206574s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.143750339 +0000 UTC m=+138.817573872" watchObservedRunningTime="2026-01-25 07:59:16.208206574 +0000 UTC m=+138.882030107" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.260923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.261193 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.761182393 +0000 UTC m=+139.435005916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.287216 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jppn9" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.361859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.362922 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.862907546 +0000 UTC m=+139.536731079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.376658 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:16 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 25 07:59:16 crc kubenswrapper[4832]: [+]process-running ok Jan 25 07:59:16 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.376727 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.391490 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c8cgr" podStartSLOduration=120.39146886 podStartE2EDuration="2m0.39146886s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.328183034 +0000 UTC m=+139.002006567" watchObservedRunningTime="2026-01-25 07:59:16.39146886 +0000 UTC m=+139.065292393" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.455246 4832 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b84df" podStartSLOduration=120.455228321 podStartE2EDuration="2m0.455228321s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.396042254 +0000 UTC m=+139.069865807" watchObservedRunningTime="2026-01-25 07:59:16.455228321 +0000 UTC m=+139.129051864" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.456518 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5bk7m" podStartSLOduration=8.456509875 podStartE2EDuration="8.456509875s" podCreationTimestamp="2026-01-25 07:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.453842075 +0000 UTC m=+139.127665618" watchObservedRunningTime="2026-01-25 07:59:16.456509875 +0000 UTC m=+139.130333408" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.463247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.463594 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:16.963580163 +0000 UTC m=+139.637403696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.469193 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fswfm" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.555654 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fns8l" podStartSLOduration=120.55563899 podStartE2EDuration="2m0.55563899s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.555246237 +0000 UTC m=+139.229069770" watchObservedRunningTime="2026-01-25 07:59:16.55563899 +0000 UTC m=+139.229462523" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.563913 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.564332 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 07:59:17.064318073 +0000 UTC m=+139.738141606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.595638 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c8c6f" podStartSLOduration=120.595613639 podStartE2EDuration="2m0.595613639s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.592927979 +0000 UTC m=+139.266751522" watchObservedRunningTime="2026-01-25 07:59:16.595613639 +0000 UTC m=+139.269437172" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.627611 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hgzxd"] Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.628833 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.634199 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.665178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-catalog-content\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.665247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbmfg\" (UniqueName: \"kubernetes.io/projected/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-kube-api-access-gbmfg\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.665295 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-utilities\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.665318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:16 crc 
kubenswrapper[4832]: E0125 07:59:16.665606 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.165594741 +0000 UTC m=+139.839418274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.666064 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgzxd"] Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.711701 4832 csr.go:261] certificate signing request csr-rqfh6 is approved, waiting to be issued Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.722497 4832 patch_prober.go:28] interesting pod/apiserver-76f77b778f-99kns container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]log ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]etcd ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/generic-apiserver-start-informers ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/max-in-flight-filter ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 25 
07:59:16 crc kubenswrapper[4832]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 25 07:59:16 crc kubenswrapper[4832]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/project.openshift.io-projectcache ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-startinformers ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 25 07:59:16 crc kubenswrapper[4832]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 25 07:59:16 crc kubenswrapper[4832]: livez check failed Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.722562 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-99kns" podUID="d506c861-ab5e-4341-8e16-ce9166f24d5c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.755731 4832 csr.go:257] certificate signing request csr-rqfh6 is issued Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.760191 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-knhz8" podStartSLOduration=120.760170913 podStartE2EDuration="2m0.760170913s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.712751723 +0000 UTC m=+139.386575266" watchObservedRunningTime="2026-01-25 07:59:16.760170913 +0000 UTC m=+139.433994446" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.766834 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.766997 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.266977473 +0000 UTC m=+139.940801006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.767268 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbmfg\" (UniqueName: \"kubernetes.io/projected/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-kube-api-access-gbmfg\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.767337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-utilities\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.767364 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.767432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-catalog-content\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.767899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-catalog-content\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.768452 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-utilities\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.768728 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.268712251 +0000 UTC m=+139.942535784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.810547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbmfg\" (UniqueName: \"kubernetes.io/projected/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-kube-api-access-gbmfg\") pod \"community-operators-hgzxd\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") " pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.811345 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" podStartSLOduration=120.81133146 podStartE2EDuration="2m0.81133146s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.761759487 +0000 UTC m=+139.435583020" watchObservedRunningTime="2026-01-25 07:59:16.81133146 +0000 UTC m=+139.485154993" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.814159 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7ntqw"] Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.815246 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.826591 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.839917 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dswxl" podStartSLOduration=120.839895464 podStartE2EDuration="2m0.839895464s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.826587865 +0000 UTC m=+139.500411398" watchObservedRunningTime="2026-01-25 07:59:16.839895464 +0000 UTC m=+139.513718997" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.840969 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ntqw"] Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.868835 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.869073 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.368995566 +0000 UTC m=+140.042819099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.869172 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-utilities\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.869212 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-catalog-content\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.869449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4xpz\" (UniqueName: \"kubernetes.io/projected/e70962d8-5db3-43c3-84bf-380addc38e9c-kube-api-access-s4xpz\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.869568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.883883 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.383867958 +0000 UTC m=+140.057691491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.885704 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" podStartSLOduration=120.885692069 podStartE2EDuration="2m0.885692069s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:16.885146131 +0000 UTC m=+139.558969664" watchObservedRunningTime="2026-01-25 07:59:16.885692069 +0000 UTC m=+139.559515602" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.930215 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" podStartSLOduration=120.930199652 podStartE2EDuration="2m0.930199652s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-25 07:59:16.928447502 +0000 UTC m=+139.602271055" watchObservedRunningTime="2026-01-25 07:59:16.930199652 +0000 UTC m=+139.604023185" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.932917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" event={"ID":"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2","Type":"ContainerStarted","Data":"3df8532b36c96eb79feb1a63d9302483fd6cb947a880460d3f2785356ab25cd4"} Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.938310 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvld2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.938351 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvld2" podUID="c05896f4-ee7d-4b10-949e-b8bf0d822313" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.950814 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgzxd" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.970870 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.971110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4xpz\" (UniqueName: \"kubernetes.io/projected/e70962d8-5db3-43c3-84bf-380addc38e9c-kube-api-access-s4xpz\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.971333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-utilities\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.971359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-catalog-content\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: E0125 07:59:16.972466 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 07:59:17.472417086 +0000 UTC m=+140.146240619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.972951 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-utilities\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:16 crc kubenswrapper[4832]: I0125 07:59:16.973687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-catalog-content\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.003167 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vhn96" podStartSLOduration=121.003149333 podStartE2EDuration="2m1.003149333s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:17.002562894 +0000 UTC m=+139.676386427" watchObservedRunningTime="2026-01-25 07:59:17.003149333 +0000 UTC m=+139.676972866" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.051726 4832 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t7rlc"] Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.074279 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.074583 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.574567464 +0000 UTC m=+140.248390997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.075545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4xpz\" (UniqueName: \"kubernetes.io/projected/e70962d8-5db3-43c3-84bf-380addc38e9c-kube-api-access-s4xpz\") pod \"certified-operators-7ntqw\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") " pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.128082 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:17 crc 
kubenswrapper[4832]: I0125 07:59:17.128122 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t7rlc"] Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.128197 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.146637 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.149030 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drfl8" podStartSLOduration=121.149021176 podStartE2EDuration="2m1.149021176s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:17.104244965 +0000 UTC m=+139.778068498" watchObservedRunningTime="2026-01-25 07:59:17.149021176 +0000 UTC m=+139.822844709" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.150411 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jvld2" podStartSLOduration=121.150406784 podStartE2EDuration="2m1.150406784s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:17.148116046 +0000 UTC m=+139.821939579" watchObservedRunningTime="2026-01-25 07:59:17.150406784 +0000 UTC m=+139.824230317" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.163099 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kpg7m" podStartSLOduration=121.163082301 
podStartE2EDuration="2m1.163082301s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:17.162735579 +0000 UTC m=+139.836559112" watchObservedRunningTime="2026-01-25 07:59:17.163082301 +0000 UTC m=+139.836905824" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.201473 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.201736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9gvc\" (UniqueName: \"kubernetes.io/projected/41a974dc-0fea-4f11-930e-c11f28840e71-kube-api-access-n9gvc\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.201814 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-utilities\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.201900 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-catalog-content\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " 
pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.202099 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.702076487 +0000 UTC m=+140.375900020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.212295 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jlxs" podStartSLOduration=121.212263701 podStartE2EDuration="2m1.212263701s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:17.193820828 +0000 UTC m=+139.867644361" watchObservedRunningTime="2026-01-25 07:59:17.212263701 +0000 UTC m=+139.886087234" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.225985 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rxv7n"] Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.227162 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.268168 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxv7n"] Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.306108 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9gvc\" (UniqueName: \"kubernetes.io/projected/41a974dc-0fea-4f11-930e-c11f28840e71-kube-api-access-n9gvc\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.306476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-utilities\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.306515 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.306561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-catalog-content\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.306593 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-utilities\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.306619 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz87r\" (UniqueName: \"kubernetes.io/projected/af8ce14e-9431-4f98-b50b-761208bdab1c-kube-api-access-pz87r\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.306657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-catalog-content\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.307352 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-utilities\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.307624 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.807614339 +0000 UTC m=+140.481437872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.310733 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-catalog-content\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.351886 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9gvc\" (UniqueName: \"kubernetes.io/projected/41a974dc-0fea-4f11-930e-c11f28840e71-kube-api-access-n9gvc\") pod \"community-operators-t7rlc\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.371803 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:17 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 25 07:59:17 crc kubenswrapper[4832]: [+]process-running ok Jan 25 07:59:17 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.371876 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.411954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.412102 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-catalog-content\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.412212 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-utilities\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.412230 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz87r\" (UniqueName: \"kubernetes.io/projected/af8ce14e-9431-4f98-b50b-761208bdab1c-kube-api-access-pz87r\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.412901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-catalog-content\") pod \"certified-operators-rxv7n\" (UID: 
\"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.413009 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:17.912983835 +0000 UTC m=+140.586807438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.416562 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-utilities\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.445997 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz87r\" (UniqueName: \"kubernetes.io/projected/af8ce14e-9431-4f98-b50b-761208bdab1c-kube-api-access-pz87r\") pod \"certified-operators-rxv7n\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.471642 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.491851 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgzxd"] Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.514183 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.514522 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:18.014509412 +0000 UTC m=+140.688332945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: W0125 07:59:17.526772 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca2e919_2c33_41e7_baa6_40f5437a2c3c.slice/crio-b6c719bac066722a1521079a1ebc6dfc92367eaa1f1374b71e48ced4dd4c69cb WatchSource:0}: Error finding container b6c719bac066722a1521079a1ebc6dfc92367eaa1f1374b71e48ced4dd4c69cb: Status 404 returned error can't find the container with id b6c719bac066722a1521079a1ebc6dfc92367eaa1f1374b71e48ced4dd4c69cb Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.610226 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.625526 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.626051 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:18.126035186 +0000 UTC m=+140.799858719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.711477 4832 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.720790 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.727973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.728496 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 07:59:18.228481943 +0000 UTC m=+140.902305466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xw4z9" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.757504 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-25 07:54:16 +0000 UTC, rotation deadline is 2026-12-03 14:36:19.637836943 +0000 UTC Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.757535 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7494h37m1.88030481s for next certificate rotation Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.828457 4832 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-25T07:59:17.711503Z","Handler":null,"Name":""} Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.830055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:17 crc kubenswrapper[4832]: E0125 07:59:17.830404 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 07:59:18.330372672 +0000 UTC m=+141.004196205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.851659 4832 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.851698 4832 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.935337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.991708 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ntqw"] Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.993207 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 25 07:59:17 crc kubenswrapper[4832]: I0125 07:59:17.993236 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.014886 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzxd" event={"ID":"9ca2e919-2c33-41e7-baa6-40f5437a2c3c","Type":"ContainerStarted","Data":"b6c719bac066722a1521079a1ebc6dfc92367eaa1f1374b71e48ced4dd4c69cb"} Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.070614 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" event={"ID":"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2","Type":"ContainerStarted","Data":"5b2280a0779135a3ea8715e3237094cacd0aa987c4f4ab84b33ca68d7d384f95"} Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.077608 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvld2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.077666 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvld2" podUID="c05896f4-ee7d-4b10-949e-b8bf0d822313" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 25 07:59:18 crc 
kubenswrapper[4832]: I0125 07:59:18.087067 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fcqfl" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.183373 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t7rlc"] Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.284466 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xw4z9\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.347559 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.371428 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:18 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 25 07:59:18 crc kubenswrapper[4832]: [+]process-running ok Jan 25 07:59:18 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.371484 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.390724 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.395526 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.429366 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxv7n"] Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.599294 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmnth"] Jan 25 07:59:18 crc kubenswrapper[4832]: E0125 07:59:18.634265 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca2e919_2c33_41e7_baa6_40f5437a2c3c.slice/crio-a9740819c55ba65dac41e257c64271a6fffa2f105bd173d52ba77be1e1a91b2f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca2e919_2c33_41e7_baa6_40f5437a2c3c.slice/crio-conmon-a9740819c55ba65dac41e257c64271a6fffa2f105bd173d52ba77be1e1a91b2f.scope\": RecentStats: unable to find data in memory cache]" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.635535 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.658060 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.670649 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmnth"] Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.752958 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-catalog-content\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.752997 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-utilities\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.753037 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbkz\" (UniqueName: \"kubernetes.io/projected/de82f302-d899-48c7-aedc-4b24f4541b2b-kube-api-access-wxbkz\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.784747 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xw4z9"] Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.854033 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-catalog-content\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.854086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-utilities\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.854132 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxbkz\" (UniqueName: \"kubernetes.io/projected/de82f302-d899-48c7-aedc-4b24f4541b2b-kube-api-access-wxbkz\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.854626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-utilities\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.854945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-catalog-content\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:18 crc kubenswrapper[4832]: I0125 07:59:18.876606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxbkz\" (UniqueName: 
\"kubernetes.io/projected/de82f302-d899-48c7-aedc-4b24f4541b2b-kube-api-access-wxbkz\") pod \"redhat-marketplace-qmnth\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") " pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.002443 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lbczx"] Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.003679 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.009976 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.024463 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbczx"] Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.057193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-catalog-content\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.057656 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-utilities\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.057688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkrn\" (UniqueName: 
\"kubernetes.io/projected/f61facf9-6be6-4e92-b219-73da2609112a-kube-api-access-2lkrn\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.088591 4832 generic.go:334] "Generic (PLEG): container finished" podID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerID="b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092" exitCode=0 Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.088676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxv7n" event={"ID":"af8ce14e-9431-4f98-b50b-761208bdab1c","Type":"ContainerDied","Data":"b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.088726 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxv7n" event={"ID":"af8ce14e-9431-4f98-b50b-761208bdab1c","Type":"ContainerStarted","Data":"d382ef68ef07fc75cceda225337a1834a482e1607f321dd4423475b08cf3e3fd"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.090738 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.093301 4832 generic.go:334] "Generic (PLEG): container finished" podID="41a974dc-0fea-4f11-930e-c11f28840e71" containerID="26c22dde58d1e0d8a24d93e22410c4c4b46912472c0afbde1cbf51960e9ce222" exitCode=0 Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.093411 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7rlc" event={"ID":"41a974dc-0fea-4f11-930e-c11f28840e71","Type":"ContainerDied","Data":"26c22dde58d1e0d8a24d93e22410c4c4b46912472c0afbde1cbf51960e9ce222"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.093436 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-t7rlc" event={"ID":"41a974dc-0fea-4f11-930e-c11f28840e71","Type":"ContainerStarted","Data":"8e312a737e7edaab9ff8909117577b06b829fd2dda2596086481329749b7220a"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.106994 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" event={"ID":"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2","Type":"ContainerStarted","Data":"566a40fd88fe85fdf80a0e241329e10d09a6f058272952e5037e909329e194c9"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.116306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" event={"ID":"267d2772-42e1-4031-bc5f-ac78559a7f82","Type":"ContainerStarted","Data":"2e4a259f45e25f040e748dd03bdc843d58af9dfb6b764398371bccceeb62895b"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.116407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" event={"ID":"267d2772-42e1-4031-bc5f-ac78559a7f82","Type":"ContainerStarted","Data":"c12e4fcdfe62748c8378c2d864a15c0e20bcb1ff3331dd8ec72ab9e1e242d267"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.117301 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.121614 4832 generic.go:334] "Generic (PLEG): container finished" podID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerID="54eca1bc87adc3d2b05494c017fdad90e29819a526374686473f122d4dffd0c8" exitCode=0 Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.121676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ntqw" event={"ID":"e70962d8-5db3-43c3-84bf-380addc38e9c","Type":"ContainerDied","Data":"54eca1bc87adc3d2b05494c017fdad90e29819a526374686473f122d4dffd0c8"} Jan 25 07:59:19 crc 
kubenswrapper[4832]: I0125 07:59:19.121695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ntqw" event={"ID":"e70962d8-5db3-43c3-84bf-380addc38e9c","Type":"ContainerStarted","Data":"1c962dbb608a1dee25986c1352c3b194a3342adc2556faad12137e1d2184c600"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.143026 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerID="a9740819c55ba65dac41e257c64271a6fffa2f105bd173d52ba77be1e1a91b2f" exitCode=0 Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.144006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzxd" event={"ID":"9ca2e919-2c33-41e7-baa6-40f5437a2c3c","Type":"ContainerDied","Data":"a9740819c55ba65dac41e257c64271a6fffa2f105bd173d52ba77be1e1a91b2f"} Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.159067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-catalog-content\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.159137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-utilities\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.159180 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lkrn\" (UniqueName: \"kubernetes.io/projected/f61facf9-6be6-4e92-b219-73da2609112a-kube-api-access-2lkrn\") pod \"redhat-marketplace-lbczx\" (UID: 
\"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.162205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-catalog-content\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.162850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-utilities\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.167305 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" podStartSLOduration=123.167294582 podStartE2EDuration="2m3.167294582s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:19.166510477 +0000 UTC m=+141.840334010" watchObservedRunningTime="2026-01-25 07:59:19.167294582 +0000 UTC m=+141.841118115" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.199896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lkrn\" (UniqueName: \"kubernetes.io/projected/f61facf9-6be6-4e92-b219-73da2609112a-kube-api-access-2lkrn\") pod \"redhat-marketplace-lbczx\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.296586 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qmnth"] Jan 25 07:59:19 crc kubenswrapper[4832]: W0125 07:59:19.304639 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde82f302_d899_48c7_aedc_4b24f4541b2b.slice/crio-431d294c492ed2eb7131c55cbcf8b2b7d3cfeb9b126674d8cf875938e17d1637 WatchSource:0}: Error finding container 431d294c492ed2eb7131c55cbcf8b2b7d3cfeb9b126674d8cf875938e17d1637: Status 404 returned error can't find the container with id 431d294c492ed2eb7131c55cbcf8b2b7d3cfeb9b126674d8cf875938e17d1637 Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.324214 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.365943 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:19 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 25 07:59:19 crc kubenswrapper[4832]: [+]process-running ok Jan 25 07:59:19 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.365999 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.553166 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbczx"] Jan 25 07:59:19 crc kubenswrapper[4832]: W0125 07:59:19.561561 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf61facf9_6be6_4e92_b219_73da2609112a.slice/crio-9c2aea71028eadd5c75dda6ae960d3cfbfb9c5f3eadca52b33ba3f2b0d4a6922 WatchSource:0}: Error finding container 9c2aea71028eadd5c75dda6ae960d3cfbfb9c5f3eadca52b33ba3f2b0d4a6922: Status 404 returned error can't find the container with id 9c2aea71028eadd5c75dda6ae960d3cfbfb9c5f3eadca52b33ba3f2b0d4a6922 Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.684904 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.794756 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f6nwt"] Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.796226 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.798099 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.814815 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f6nwt"] Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.870045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbldb\" (UniqueName: \"kubernetes.io/projected/479892d8-5a53-40ee-9f16-d4480c2c3e03-kube-api-access-dbldb\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.870442 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-utilities\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.870594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-catalog-content\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.971598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-utilities\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.971706 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-catalog-content\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.971782 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbldb\" (UniqueName: \"kubernetes.io/projected/479892d8-5a53-40ee-9f16-d4480c2c3e03-kube-api-access-dbldb\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.972993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-utilities\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.973208 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-catalog-content\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:19 crc kubenswrapper[4832]: I0125 07:59:19.996412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbldb\" (UniqueName: \"kubernetes.io/projected/479892d8-5a53-40ee-9f16-d4480c2c3e03-kube-api-access-dbldb\") pod \"redhat-operators-f6nwt\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") " pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.112053 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.156083 4832 generic.go:334] "Generic (PLEG): container finished" podID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerID="bbc3775b6b6494c05ef373c63a534637c6029db1d75be738e8d862cbca808950" exitCode=0 Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.156528 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmnth" event={"ID":"de82f302-d899-48c7-aedc-4b24f4541b2b","Type":"ContainerDied","Data":"bbc3775b6b6494c05ef373c63a534637c6029db1d75be738e8d862cbca808950"} Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.156584 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmnth" event={"ID":"de82f302-d899-48c7-aedc-4b24f4541b2b","Type":"ContainerStarted","Data":"431d294c492ed2eb7131c55cbcf8b2b7d3cfeb9b126674d8cf875938e17d1637"} Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.174993 4832 generic.go:334] "Generic (PLEG): container finished" podID="f61facf9-6be6-4e92-b219-73da2609112a" containerID="f2f1cfdcb4c31c4471992b5911dc06df838ccff4afdf30db167fe8223454f869" exitCode=0 Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.175050 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbczx" event={"ID":"f61facf9-6be6-4e92-b219-73da2609112a","Type":"ContainerDied","Data":"f2f1cfdcb4c31c4471992b5911dc06df838ccff4afdf30db167fe8223454f869"} Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.175075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbczx" event={"ID":"f61facf9-6be6-4e92-b219-73da2609112a","Type":"ContainerStarted","Data":"9c2aea71028eadd5c75dda6ae960d3cfbfb9c5f3eadca52b33ba3f2b0d4a6922"} Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.209782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" event={"ID":"4b4ff59a-58d8-4822-8be8-d48a5a85b2d2","Type":"ContainerStarted","Data":"0f45bcd055e3e380932d8e4cb1c1cef816c5680b3e7e1e8b17751baaf538ec18"} Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.213050 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c5q4h"] Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.214432 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.221340 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c5q4h"] Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.288969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-utilities\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.289051 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-catalog-content\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.289271 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5rqj\" (UniqueName: \"kubernetes.io/projected/57a844ec-e431-4caf-9471-00460db6589c-kube-api-access-w5rqj\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc 
kubenswrapper[4832]: I0125 07:59:20.367973 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:20 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 25 07:59:20 crc kubenswrapper[4832]: [+]process-running ok Jan 25 07:59:20 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.368033 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.391143 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5rqj\" (UniqueName: \"kubernetes.io/projected/57a844ec-e431-4caf-9471-00460db6589c-kube-api-access-w5rqj\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.391223 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-utilities\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.391274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-catalog-content\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 
crc kubenswrapper[4832]: I0125 07:59:20.392453 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-catalog-content\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.392507 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-utilities\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.392754 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jjs2r" podStartSLOduration=12.392742181 podStartE2EDuration="12.392742181s" podCreationTimestamp="2026-01-25 07:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:20.271410286 +0000 UTC m=+142.945233819" watchObservedRunningTime="2026-01-25 07:59:20.392742181 +0000 UTC m=+143.066565724" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.397729 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f6nwt"] Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.408932 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5rqj\" (UniqueName: \"kubernetes.io/projected/57a844ec-e431-4caf-9471-00460db6589c-kube-api-access-w5rqj\") pod \"redhat-operators-c5q4h\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.471267 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.476888 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-99kns" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.540920 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.797767 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.797821 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.816377 4832 patch_prober.go:28] interesting pod/console-f9d7485db-8pg27 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.816652 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8pg27" podUID="95dbbcf8-838b-4f56-928a-81b4f038b259" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 25 07:59:20 crc kubenswrapper[4832]: I0125 07:59:20.824806 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c5q4h"] Jan 25 07:59:20 crc kubenswrapper[4832]: W0125 07:59:20.858713 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57a844ec_e431_4caf_9471_00460db6589c.slice/crio-7b714b4fe08a81b53110b640ac40fc64d4af249cc00d60dc79131c287d99f3d8 WatchSource:0}: 
Error finding container 7b714b4fe08a81b53110b640ac40fc64d4af249cc00d60dc79131c287d99f3d8: Status 404 returned error can't find the container with id 7b714b4fe08a81b53110b640ac40fc64d4af249cc00d60dc79131c287d99f3d8 Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.198709 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.200111 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.203689 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.203764 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.207021 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.230106 4832 generic.go:334] "Generic (PLEG): container finished" podID="051ceaa0-fdb3-480a-9c5d-f56b1194ca81" containerID="6387974f472abd37b386de1337e463ca8517d1c91ef706a01e56a7509c79ae88" exitCode=0 Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.230265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" event={"ID":"051ceaa0-fdb3-480a-9c5d-f56b1194ca81","Type":"ContainerDied","Data":"6387974f472abd37b386de1337e463ca8517d1c91ef706a01e56a7509c79ae88"} Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.242707 4832 generic.go:334] "Generic (PLEG): container finished" podID="57a844ec-e431-4caf-9471-00460db6589c" 
containerID="cfa24793b9bb832c35772653f752387268ede9def4b222f81bd79c32bc9bc02e" exitCode=0 Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.243030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5q4h" event={"ID":"57a844ec-e431-4caf-9471-00460db6589c","Type":"ContainerDied","Data":"cfa24793b9bb832c35772653f752387268ede9def4b222f81bd79c32bc9bc02e"} Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.243100 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5q4h" event={"ID":"57a844ec-e431-4caf-9471-00460db6589c","Type":"ContainerStarted","Data":"7b714b4fe08a81b53110b640ac40fc64d4af249cc00d60dc79131c287d99f3d8"} Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.245497 4832 generic.go:334] "Generic (PLEG): container finished" podID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerID="e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766" exitCode=0 Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.246454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6nwt" event={"ID":"479892d8-5a53-40ee-9f16-d4480c2c3e03","Type":"ContainerDied","Data":"e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766"} Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.246477 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6nwt" event={"ID":"479892d8-5a53-40ee-9f16-d4480c2c3e03","Type":"ContainerStarted","Data":"127cc4332ddae9518675191b7ff5d76421650c33e5fd334f43393e427ed6939d"} Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.333809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.334064 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.337078 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.337983 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.344906 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.345125 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.352771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.362975 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.369859 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:21 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 25 07:59:21 crc kubenswrapper[4832]: 
[+]process-running ok Jan 25 07:59:21 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.369947 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.435011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.435082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.435159 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/328c90cf-cde8-414b-b243-a29a708b2a87-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.435217 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/328c90cf-cde8-414b-b243-a29a708b2a87-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.435235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.479477 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.536122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/328c90cf-cde8-414b-b243-a29a708b2a87-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.536198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/328c90cf-cde8-414b-b243-a29a708b2a87-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.536222 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/328c90cf-cde8-414b-b243-a29a708b2a87-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 
crc kubenswrapper[4832]: I0125 07:59:21.562202 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/328c90cf-cde8-414b-b243-a29a708b2a87-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.577896 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.672717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.738822 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvld2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.738871 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvld2" podUID="c05896f4-ee7d-4b10-949e-b8bf0d822313" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.738900 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvld2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.738954 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvld2" 
podUID="c05896f4-ee7d-4b10-949e-b8bf0d822313" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 25 07:59:21 crc kubenswrapper[4832]: I0125 07:59:21.964202 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.165048 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.166442 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.264247 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.286149 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d670ef2-a7fe-4ce7-903c-685c953bb63e","Type":"ContainerStarted","Data":"4a7e88bc850a47b418c1a825d049a031a8a2a677c9a5642859955ccee27ea223"} Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.387330 4832 patch_prober.go:28] interesting pod/router-default-5444994796-xjkrg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 07:59:22 crc kubenswrapper[4832]: [-]has-synced 
failed: reason withheld Jan 25 07:59:22 crc kubenswrapper[4832]: [+]process-running ok Jan 25 07:59:22 crc kubenswrapper[4832]: healthz check failed Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.387404 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xjkrg" podUID="cdc4f06b-3e9a-4855-8400-faabc37cd870" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.669258 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.786199 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-secret-volume\") pod \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.786265 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-config-volume\") pod \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.786407 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4l94\" (UniqueName: \"kubernetes.io/projected/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-kube-api-access-l4l94\") pod \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\" (UID: \"051ceaa0-fdb3-480a-9c5d-f56b1194ca81\") " Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.790093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-config-volume" (OuterVolumeSpecName: 
"config-volume") pod "051ceaa0-fdb3-480a-9c5d-f56b1194ca81" (UID: "051ceaa0-fdb3-480a-9c5d-f56b1194ca81"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.793790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "051ceaa0-fdb3-480a-9c5d-f56b1194ca81" (UID: "051ceaa0-fdb3-480a-9c5d-f56b1194ca81"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.804592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-kube-api-access-l4l94" (OuterVolumeSpecName: "kube-api-access-l4l94") pod "051ceaa0-fdb3-480a-9c5d-f56b1194ca81" (UID: "051ceaa0-fdb3-480a-9c5d-f56b1194ca81"). InnerVolumeSpecName "kube-api-access-l4l94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.888112 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4l94\" (UniqueName: \"kubernetes.io/projected/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-kube-api-access-l4l94\") on node \"crc\" DevicePath \"\"" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.888149 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 07:59:22 crc kubenswrapper[4832]: I0125 07:59:22.888160 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/051ceaa0-fdb3-480a-9c5d-f56b1194ca81-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.184078 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-88fz6" Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.312878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" event={"ID":"051ceaa0-fdb3-480a-9c5d-f56b1194ca81","Type":"ContainerDied","Data":"d0a22e098791e15839c35b35e96e335c398e955897d9a70799c3ad2fb614120c"} Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.312937 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a22e098791e15839c35b35e96e335c398e955897d9a70799c3ad2fb614120c" Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.312906 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79" Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.321401 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d670ef2-a7fe-4ce7-903c-685c953bb63e","Type":"ContainerStarted","Data":"50aff81866d2a4cd55074450e6577b2187dcc027991f8eefcd65cb13c749edef"} Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.327009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"328c90cf-cde8-414b-b243-a29a708b2a87","Type":"ContainerStarted","Data":"3276cca74b939d9c0812f45fb84889b5f935a8ccb0f54d91d0b15a9d95ebc7e2"} Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.338475 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.338445107 podStartE2EDuration="2.338445107s" podCreationTimestamp="2026-01-25 07:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 07:59:23.336634065 +0000 UTC m=+146.010457598" watchObservedRunningTime="2026-01-25 07:59:23.338445107 +0000 UTC m=+146.012268640" Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.367903 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:23 crc kubenswrapper[4832]: I0125 07:59:23.383186 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xjkrg" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.340215 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d670ef2-a7fe-4ce7-903c-685c953bb63e" containerID="50aff81866d2a4cd55074450e6577b2187dcc027991f8eefcd65cb13c749edef" exitCode=0 Jan 25 07:59:24 
crc kubenswrapper[4832]: I0125 07:59:24.340316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d670ef2-a7fe-4ce7-903c-685c953bb63e","Type":"ContainerDied","Data":"50aff81866d2a4cd55074450e6577b2187dcc027991f8eefcd65cb13c749edef"} Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.345696 4832 generic.go:334] "Generic (PLEG): container finished" podID="328c90cf-cde8-414b-b243-a29a708b2a87" containerID="dbbed90fdb44bc9aaf3f73fdbbb6c6c18db995ad914d6e0688ed3ed2110df7bc" exitCode=0 Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.345817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"328c90cf-cde8-414b-b243-a29a708b2a87","Type":"ContainerDied","Data":"dbbed90fdb44bc9aaf3f73fdbbb6c6c18db995ad914d6e0688ed3ed2110df7bc"} Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.510779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.510896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.510932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.518458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.520396 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.612310 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.630907 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 
07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.685019 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.715005 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 07:59:24 crc kubenswrapper[4832]: I0125 07:59:24.830130 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:25 crc kubenswrapper[4832]: I0125 07:59:25.043230 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 07:59:25 crc kubenswrapper[4832]: I0125 07:59:25.365266 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2342a428f3148f358ed942cb25aebcb293b58c6ba6ca177b7a4000297d7f9a41"} Jan 25 07:59:25 crc kubenswrapper[4832]: I0125 07:59:25.794535 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:25 crc kubenswrapper[4832]: I0125 07:59:25.958218 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kube-api-access\") pod \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " Jan 25 07:59:25 crc kubenswrapper[4832]: I0125 07:59:25.958424 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kubelet-dir\") pod \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\" (UID: \"2d670ef2-a7fe-4ce7-903c-685c953bb63e\") " Jan 25 07:59:25 crc kubenswrapper[4832]: I0125 07:59:25.958831 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d670ef2-a7fe-4ce7-903c-685c953bb63e" (UID: "2d670ef2-a7fe-4ce7-903c-685c953bb63e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 07:59:25 crc kubenswrapper[4832]: I0125 07:59:25.962376 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d670ef2-a7fe-4ce7-903c-685c953bb63e" (UID: "2d670ef2-a7fe-4ce7-903c-685c953bb63e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.060402 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.060433 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d670ef2-a7fe-4ce7-903c-685c953bb63e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.354076 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.383146 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d670ef2-a7fe-4ce7-903c-685c953bb63e","Type":"ContainerDied","Data":"4a7e88bc850a47b418c1a825d049a031a8a2a677c9a5642859955ccee27ea223"} Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.383189 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7e88bc850a47b418c1a825d049a031a8a2a677c9a5642859955ccee27ea223" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.383240 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.390184 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"328c90cf-cde8-414b-b243-a29a708b2a87","Type":"ContainerDied","Data":"3276cca74b939d9c0812f45fb84889b5f935a8ccb0f54d91d0b15a9d95ebc7e2"} Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.390229 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3276cca74b939d9c0812f45fb84889b5f935a8ccb0f54d91d0b15a9d95ebc7e2" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.390319 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.392493 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c6341824b51580a5dc776d9a54cb2f320031a274d56a0c77ef7862930a94c28c"} Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.393744 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"249e630150024ef67984ce33722179d30f162439e266957633dd90fe34287a5c"} Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.473925 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/328c90cf-cde8-414b-b243-a29a708b2a87-kube-api-access\") pod \"328c90cf-cde8-414b-b243-a29a708b2a87\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.474033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/328c90cf-cde8-414b-b243-a29a708b2a87-kubelet-dir\") pod \"328c90cf-cde8-414b-b243-a29a708b2a87\" (UID: \"328c90cf-cde8-414b-b243-a29a708b2a87\") " Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.474298 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/328c90cf-cde8-414b-b243-a29a708b2a87-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "328c90cf-cde8-414b-b243-a29a708b2a87" (UID: "328c90cf-cde8-414b-b243-a29a708b2a87"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.478030 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328c90cf-cde8-414b-b243-a29a708b2a87-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "328c90cf-cde8-414b-b243-a29a708b2a87" (UID: "328c90cf-cde8-414b-b243-a29a708b2a87"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.576909 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/328c90cf-cde8-414b-b243-a29a708b2a87-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 07:59:26 crc kubenswrapper[4832]: I0125 07:59:26.576955 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/328c90cf-cde8-414b-b243-a29a708b2a87-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 07:59:27 crc kubenswrapper[4832]: I0125 07:59:27.445058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ac5fe9ec7d4329ca4163a762a9bc96a515bf57660751e54959d1d65e47079a2b"} Jan 25 07:59:27 crc kubenswrapper[4832]: I0125 07:59:27.453935 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"89befd1b44bb8a74927ca16dd042d96428cc2524be3c86494e68412e096e32ae"} Jan 25 07:59:27 crc kubenswrapper[4832]: I0125 07:59:27.459911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"61fa5499963155d358058e16e20932c3aafa228e4c50594da4f6565b46c5380f"} Jan 25 07:59:28 crc kubenswrapper[4832]: I0125 07:59:28.465192 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 07:59:30 crc kubenswrapper[4832]: I0125 07:59:30.804469 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 
07:59:30 crc kubenswrapper[4832]: I0125 07:59:30.808533 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 07:59:31 crc kubenswrapper[4832]: I0125 07:59:31.743754 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jvld2" Jan 25 07:59:38 crc kubenswrapper[4832]: I0125 07:59:38.399083 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 07:59:38 crc kubenswrapper[4832]: I0125 07:59:38.843170 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:38 crc kubenswrapper[4832]: I0125 07:59:38.848282 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1a15135-866b-4644-97aa-34c7da815b6b-metrics-certs\") pod \"network-metrics-daemon-nzj5s\" (UID: \"b1a15135-866b-4644-97aa-34c7da815b6b\") " pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:39 crc kubenswrapper[4832]: I0125 07:59:39.124718 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nzj5s" Jan 25 07:59:51 crc kubenswrapper[4832]: I0125 07:59:51.478323 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqtnp" Jan 25 07:59:52 crc kubenswrapper[4832]: I0125 07:59:52.149733 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 07:59:52 crc kubenswrapper[4832]: I0125 07:59:52.149800 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.531845 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 07:59:59 crc kubenswrapper[4832]: E0125 07:59:59.532480 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d670ef2-a7fe-4ce7-903c-685c953bb63e" containerName="pruner" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.532501 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d670ef2-a7fe-4ce7-903c-685c953bb63e" containerName="pruner" Jan 25 07:59:59 crc kubenswrapper[4832]: E0125 07:59:59.532530 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051ceaa0-fdb3-480a-9c5d-f56b1194ca81" containerName="collect-profiles" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.532542 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="051ceaa0-fdb3-480a-9c5d-f56b1194ca81" containerName="collect-profiles" Jan 25 
07:59:59 crc kubenswrapper[4832]: E0125 07:59:59.532561 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328c90cf-cde8-414b-b243-a29a708b2a87" containerName="pruner" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.532574 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="328c90cf-cde8-414b-b243-a29a708b2a87" containerName="pruner" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.532763 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="051ceaa0-fdb3-480a-9c5d-f56b1194ca81" containerName="collect-profiles" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.532798 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d670ef2-a7fe-4ce7-903c-685c953bb63e" containerName="pruner" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.532812 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="328c90cf-cde8-414b-b243-a29a708b2a87" containerName="pruner" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.533358 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.535598 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.536121 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.539888 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.627616 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.627672 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.729589 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.729669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.729875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 07:59:59 crc kubenswrapper[4832]: I0125 07:59:59.749993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 07:59:59 crc kubenswrapper[4832]: E0125 07:59:59.783320 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 25 07:59:59 crc kubenswrapper[4832]: E0125 07:59:59.783601 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbmfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hgzxd_openshift-marketplace(9ca2e919-2c33-41e7-baa6-40f5437a2c3c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 07:59:59 crc kubenswrapper[4832]: E0125 07:59:59.784769 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hgzxd" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" Jan 25 07:59:59 crc 
kubenswrapper[4832]: I0125 07:59:59.852699 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.135594 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8"] Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.139727 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.146658 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.148613 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.155575 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8"] Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.235901 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/169d3ee1-b6be-49bc-9522-c3579c6965f4-config-volume\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.235957 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb49f\" (UniqueName: \"kubernetes.io/projected/169d3ee1-b6be-49bc-9522-c3579c6965f4-kube-api-access-vb49f\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.236006 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/169d3ee1-b6be-49bc-9522-c3579c6965f4-secret-volume\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.337288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/169d3ee1-b6be-49bc-9522-c3579c6965f4-secret-volume\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.337372 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/169d3ee1-b6be-49bc-9522-c3579c6965f4-config-volume\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.337409 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb49f\" (UniqueName: \"kubernetes.io/projected/169d3ee1-b6be-49bc-9522-c3579c6965f4-kube-api-access-vb49f\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.339042 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/169d3ee1-b6be-49bc-9522-c3579c6965f4-config-volume\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.358784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/169d3ee1-b6be-49bc-9522-c3579c6965f4-secret-volume\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.359281 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb49f\" (UniqueName: \"kubernetes.io/projected/169d3ee1-b6be-49bc-9522-c3579c6965f4-kube-api-access-vb49f\") pod \"collect-profiles-29488800-492g8\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:00 crc kubenswrapper[4832]: I0125 08:00:00.517353 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:02 crc kubenswrapper[4832]: E0125 08:00:02.840003 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hgzxd" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.727173 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.728055 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.743007 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.785139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kube-api-access\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.785225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.785405 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-var-lock\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.886184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-var-lock\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.886517 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kube-api-access\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.886560 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.886625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.886655 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-var-lock\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:03 crc kubenswrapper[4832]: I0125 08:00:03.903969 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kube-api-access\") pod \"installer-9-crc\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:04 crc kubenswrapper[4832]: I0125 08:00:04.105787 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:04 crc kubenswrapper[4832]: I0125 08:00:04.689953 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.782165 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.782450 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lkrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppA
rmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lbczx_openshift-marketplace(f61facf9-6be6-4e92-b219-73da2609112a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.783665 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lbczx" podUID="f61facf9-6be6-4e92-b219-73da2609112a" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.793935 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.794108 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9gvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t7rlc_openshift-marketplace(41a974dc-0fea-4f11-930e-c11f28840e71): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.795264 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t7rlc" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" Jan 25 08:00:04 crc 
kubenswrapper[4832]: E0125 08:00:04.853030 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.853766 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxbkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-qmnth_openshift-marketplace(de82f302-d899-48c7-aedc-4b24f4541b2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 08:00:04 crc kubenswrapper[4832]: E0125 08:00:04.855350 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qmnth" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.468368 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qmnth" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.469110 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lbczx" podUID="f61facf9-6be6-4e92-b219-73da2609112a" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.469212 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t7rlc" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.552533 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.552666 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbldb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f6nwt_openshift-marketplace(479892d8-5a53-40ee-9f16-d4480c2c3e03): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.553983 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-f6nwt" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.558480 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.558592 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5rqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c5q4h_openshift-marketplace(57a844ec-e431-4caf-9471-00460db6589c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 08:00:06 crc kubenswrapper[4832]: E0125 08:00:06.560161 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-c5q4h" podUID="57a844ec-e431-4caf-9471-00460db6589c" Jan 25 08:00:07 crc 
kubenswrapper[4832]: E0125 08:00:07.767543 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f6nwt" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" Jan 25 08:00:07 crc kubenswrapper[4832]: E0125 08:00:07.767640 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c5q4h" podUID="57a844ec-e431-4caf-9471-00460db6589c" Jan 25 08:00:07 crc kubenswrapper[4832]: E0125 08:00:07.845476 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 25 08:00:07 crc kubenswrapper[4832]: E0125 08:00:07.850628 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pz87r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rxv7n_openshift-marketplace(af8ce14e-9431-4f98-b50b-761208bdab1c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 08:00:07 crc kubenswrapper[4832]: E0125 08:00:07.852259 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rxv7n" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" Jan 25 08:00:07 crc 
kubenswrapper[4832]: E0125 08:00:07.894057 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 25 08:00:07 crc kubenswrapper[4832]: E0125 08:00:07.894225 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4xpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-7ntqw_openshift-marketplace(e70962d8-5db3-43c3-84bf-380addc38e9c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 08:00:07 crc kubenswrapper[4832]: E0125 08:00:07.898546 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7ntqw" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.209753 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nzj5s"] Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.264140 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8"] Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.268188 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 08:00:08 crc kubenswrapper[4832]: W0125 08:00:08.272863 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8b9d581a_eedd_4f2b_94a2_e175bbc4530a.slice/crio-109e15081a2868139b23b8b6b2de02ff0e98a1eba83b14bf50a2375b136b5814 WatchSource:0}: Error finding container 109e15081a2868139b23b8b6b2de02ff0e98a1eba83b14bf50a2375b136b5814: Status 404 returned error can't find the container with id 109e15081a2868139b23b8b6b2de02ff0e98a1eba83b14bf50a2375b136b5814 Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.325190 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.692828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b9d581a-eedd-4f2b-94a2-e175bbc4530a","Type":"ContainerStarted","Data":"8b38f069369397f5371183e37b5eb0cab4e4d4855c7953a9e90f7a6768a8d7d4"} Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.693178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b9d581a-eedd-4f2b-94a2-e175bbc4530a","Type":"ContainerStarted","Data":"109e15081a2868139b23b8b6b2de02ff0e98a1eba83b14bf50a2375b136b5814"} Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.696582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" event={"ID":"b1a15135-866b-4644-97aa-34c7da815b6b","Type":"ContainerStarted","Data":"be14ed1d1490f669b26597f05fe67ef6f27498733f54e68cfa468b4c236e6392"} Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.696623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nzj5s" event={"ID":"b1a15135-866b-4644-97aa-34c7da815b6b","Type":"ContainerStarted","Data":"b7fe543dd7d90602774ad9c11c0316a8aa23b04242e9ecf96ac30c30bc3525de"} Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.697671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fb5919b8-3fe4-439b-b6dd-c23648b81b1e","Type":"ContainerStarted","Data":"b53f183eac6702a6af51f2719bf1783f0927737b692f9e76da0735e779cde392"} Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.699251 4832 generic.go:334] "Generic (PLEG): container finished" podID="169d3ee1-b6be-49bc-9522-c3579c6965f4" containerID="5f37ea3a126374f6bc752d94be6de4dbaa535813eb6522dc68fa3ce71b8c7394" exitCode=0 Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.699339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" 
event={"ID":"169d3ee1-b6be-49bc-9522-c3579c6965f4","Type":"ContainerDied","Data":"5f37ea3a126374f6bc752d94be6de4dbaa535813eb6522dc68fa3ce71b8c7394"} Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.699427 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" event={"ID":"169d3ee1-b6be-49bc-9522-c3579c6965f4","Type":"ContainerStarted","Data":"8a54e3de0f4a4f9e27563b735cd648cce93088234e4667d0cb86b9d2c6b4259e"} Jan 25 08:00:08 crc kubenswrapper[4832]: E0125 08:00:08.701009 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7ntqw" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" Jan 25 08:00:08 crc kubenswrapper[4832]: E0125 08:00:08.701750 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rxv7n" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" Jan 25 08:00:08 crc kubenswrapper[4832]: I0125 08:00:08.712355 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.712337168 podStartE2EDuration="5.712337168s" podCreationTimestamp="2026-01-25 08:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:00:08.709832654 +0000 UTC m=+191.383656187" watchObservedRunningTime="2026-01-25 08:00:08.712337168 +0000 UTC m=+191.386160701" Jan 25 08:00:09 crc kubenswrapper[4832]: I0125 08:00:09.719291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-nzj5s" event={"ID":"b1a15135-866b-4644-97aa-34c7da815b6b","Type":"ContainerStarted","Data":"24cc5f1e44097517eb46e6db11b20dd3b25be788e26c918ddc286ed666872a62"} Jan 25 08:00:09 crc kubenswrapper[4832]: I0125 08:00:09.723358 4832 generic.go:334] "Generic (PLEG): container finished" podID="fb5919b8-3fe4-439b-b6dd-c23648b81b1e" containerID="135ab25f4262e7cff10b79af61c11efd1631bead06a9efa458bbdd6dfaf520cb" exitCode=0 Jan 25 08:00:09 crc kubenswrapper[4832]: I0125 08:00:09.723830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fb5919b8-3fe4-439b-b6dd-c23648b81b1e","Type":"ContainerDied","Data":"135ab25f4262e7cff10b79af61c11efd1631bead06a9efa458bbdd6dfaf520cb"} Jan 25 08:00:09 crc kubenswrapper[4832]: I0125 08:00:09.754824 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nzj5s" podStartSLOduration=173.754805222 podStartE2EDuration="2m53.754805222s" podCreationTimestamp="2026-01-25 07:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:00:09.738928975 +0000 UTC m=+192.412752508" watchObservedRunningTime="2026-01-25 08:00:09.754805222 +0000 UTC m=+192.428628765" Jan 25 08:00:09 crc kubenswrapper[4832]: I0125 08:00:09.949163 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.066810 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/169d3ee1-b6be-49bc-9522-c3579c6965f4-config-volume\") pod \"169d3ee1-b6be-49bc-9522-c3579c6965f4\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.067225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb49f\" (UniqueName: \"kubernetes.io/projected/169d3ee1-b6be-49bc-9522-c3579c6965f4-kube-api-access-vb49f\") pod \"169d3ee1-b6be-49bc-9522-c3579c6965f4\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.067276 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/169d3ee1-b6be-49bc-9522-c3579c6965f4-secret-volume\") pod \"169d3ee1-b6be-49bc-9522-c3579c6965f4\" (UID: \"169d3ee1-b6be-49bc-9522-c3579c6965f4\") " Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.067841 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169d3ee1-b6be-49bc-9522-c3579c6965f4-config-volume" (OuterVolumeSpecName: "config-volume") pod "169d3ee1-b6be-49bc-9522-c3579c6965f4" (UID: "169d3ee1-b6be-49bc-9522-c3579c6965f4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.071374 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169d3ee1-b6be-49bc-9522-c3579c6965f4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "169d3ee1-b6be-49bc-9522-c3579c6965f4" (UID: "169d3ee1-b6be-49bc-9522-c3579c6965f4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.071815 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169d3ee1-b6be-49bc-9522-c3579c6965f4-kube-api-access-vb49f" (OuterVolumeSpecName: "kube-api-access-vb49f") pod "169d3ee1-b6be-49bc-9522-c3579c6965f4" (UID: "169d3ee1-b6be-49bc-9522-c3579c6965f4"). InnerVolumeSpecName "kube-api-access-vb49f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.169238 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/169d3ee1-b6be-49bc-9522-c3579c6965f4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.169282 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb49f\" (UniqueName: \"kubernetes.io/projected/169d3ee1-b6be-49bc-9522-c3579c6965f4-kube-api-access-vb49f\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.169293 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/169d3ee1-b6be-49bc-9522-c3579c6965f4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.731287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" event={"ID":"169d3ee1-b6be-49bc-9522-c3579c6965f4","Type":"ContainerDied","Data":"8a54e3de0f4a4f9e27563b735cd648cce93088234e4667d0cb86b9d2c6b4259e"} Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.731659 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a54e3de0f4a4f9e27563b735cd648cce93088234e4667d0cb86b9d2c6b4259e" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.731363 4832 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8" Jan 25 08:00:10 crc kubenswrapper[4832]: I0125 08:00:10.962723 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.080460 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kubelet-dir\") pod \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.080576 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kube-api-access\") pod \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\" (UID: \"fb5919b8-3fe4-439b-b6dd-c23648b81b1e\") " Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.080592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb5919b8-3fe4-439b-b6dd-c23648b81b1e" (UID: "fb5919b8-3fe4-439b-b6dd-c23648b81b1e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.080800 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.086288 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb5919b8-3fe4-439b-b6dd-c23648b81b1e" (UID: "fb5919b8-3fe4-439b-b6dd-c23648b81b1e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.181554 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb5919b8-3fe4-439b-b6dd-c23648b81b1e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.740869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fb5919b8-3fe4-439b-b6dd-c23648b81b1e","Type":"ContainerDied","Data":"b53f183eac6702a6af51f2719bf1783f0927737b692f9e76da0735e779cde392"} Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.741155 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53f183eac6702a6af51f2719bf1783f0927737b692f9e76da0735e779cde392" Jan 25 08:00:11 crc kubenswrapper[4832]: I0125 08:00:11.740947 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 08:00:19 crc kubenswrapper[4832]: I0125 08:00:19.779739 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerID="bad721fd34d82bc8a914a20e6fade466dc886327ceaf1d22df157e4241f9866d" exitCode=0 Jan 25 08:00:19 crc kubenswrapper[4832]: I0125 08:00:19.779805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzxd" event={"ID":"9ca2e919-2c33-41e7-baa6-40f5437a2c3c","Type":"ContainerDied","Data":"bad721fd34d82bc8a914a20e6fade466dc886327ceaf1d22df157e4241f9866d"} Jan 25 08:00:19 crc kubenswrapper[4832]: I0125 08:00:19.782658 4832 generic.go:334] "Generic (PLEG): container finished" podID="41a974dc-0fea-4f11-930e-c11f28840e71" containerID="a8b08a330140a4b11f36f328b0b3831deaf08c88e41f995f0ebe478741dc6689" exitCode=0 Jan 25 08:00:19 crc kubenswrapper[4832]: I0125 08:00:19.782702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7rlc" event={"ID":"41a974dc-0fea-4f11-930e-c11f28840e71","Type":"ContainerDied","Data":"a8b08a330140a4b11f36f328b0b3831deaf08c88e41f995f0ebe478741dc6689"} Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.790520 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7rlc" event={"ID":"41a974dc-0fea-4f11-930e-c11f28840e71","Type":"ContainerStarted","Data":"87aa48e43dfdd75999b37bb0543d4cd29303b6f58532fd179379d3413e8edd7b"} Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.795158 4832 generic.go:334] "Generic (PLEG): container finished" podID="f61facf9-6be6-4e92-b219-73da2609112a" containerID="708246203bfeb2d1d9c9fed6321fb4c347791c88c2ba2374fc0d93d2b7dde952" exitCode=0 Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.795213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbczx" 
event={"ID":"f61facf9-6be6-4e92-b219-73da2609112a","Type":"ContainerDied","Data":"708246203bfeb2d1d9c9fed6321fb4c347791c88c2ba2374fc0d93d2b7dde952"} Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.797954 4832 generic.go:334] "Generic (PLEG): container finished" podID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerID="9704f0e7139e3714217680a9d4fe3a70ba17d6f8e5f513fbc3d16cf51b1ba25a" exitCode=0 Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.798017 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmnth" event={"ID":"de82f302-d899-48c7-aedc-4b24f4541b2b","Type":"ContainerDied","Data":"9704f0e7139e3714217680a9d4fe3a70ba17d6f8e5f513fbc3d16cf51b1ba25a"} Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.800598 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzxd" event={"ID":"9ca2e919-2c33-41e7-baa6-40f5437a2c3c","Type":"ContainerStarted","Data":"3ea0ea2e74d9246447567c3a5eaeb53f46cc61ea93eace6986d87ad0c2ea5e76"} Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.815922 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t7rlc" podStartSLOduration=2.662976026 podStartE2EDuration="1m3.815901205s" podCreationTimestamp="2026-01-25 07:59:17 +0000 UTC" firstStartedPulling="2026-01-25 07:59:19.09760452 +0000 UTC m=+141.771428053" lastFinishedPulling="2026-01-25 08:00:20.250529699 +0000 UTC m=+202.924353232" observedRunningTime="2026-01-25 08:00:20.815315686 +0000 UTC m=+203.489139219" watchObservedRunningTime="2026-01-25 08:00:20.815901205 +0000 UTC m=+203.489724738" Jan 25 08:00:20 crc kubenswrapper[4832]: I0125 08:00:20.875015 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hgzxd" podStartSLOduration=3.702917997 podStartE2EDuration="1m4.87499971s" podCreationTimestamp="2026-01-25 07:59:16 +0000 UTC" 
firstStartedPulling="2026-01-25 07:59:19.155878598 +0000 UTC m=+141.829702131" lastFinishedPulling="2026-01-25 08:00:20.327960301 +0000 UTC m=+203.001783844" observedRunningTime="2026-01-25 08:00:20.871787716 +0000 UTC m=+203.545611249" watchObservedRunningTime="2026-01-25 08:00:20.87499971 +0000 UTC m=+203.548823243" Jan 25 08:00:21 crc kubenswrapper[4832]: I0125 08:00:21.822536 4832 generic.go:334] "Generic (PLEG): container finished" podID="57a844ec-e431-4caf-9471-00460db6589c" containerID="cc48a6d0c12396a581fe7c35a5a6390b61ba39789f7713c9e93904edd339fece" exitCode=0 Jan 25 08:00:21 crc kubenswrapper[4832]: I0125 08:00:21.822847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5q4h" event={"ID":"57a844ec-e431-4caf-9471-00460db6589c","Type":"ContainerDied","Data":"cc48a6d0c12396a581fe7c35a5a6390b61ba39789f7713c9e93904edd339fece"} Jan 25 08:00:21 crc kubenswrapper[4832]: I0125 08:00:21.834206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbczx" event={"ID":"f61facf9-6be6-4e92-b219-73da2609112a","Type":"ContainerStarted","Data":"fbc90fbd6aed76aa89d9497e0517b25c7348ee124eb9b171e40d0e11d0ef84b4"} Jan 25 08:00:21 crc kubenswrapper[4832]: I0125 08:00:21.836758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmnth" event={"ID":"de82f302-d899-48c7-aedc-4b24f4541b2b","Type":"ContainerStarted","Data":"3fa7616eebc1718b3b41cc2b08ec70817195522aeb22689dfc06b792f55e8178"} Jan 25 08:00:21 crc kubenswrapper[4832]: I0125 08:00:21.860719 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmnth" podStartSLOduration=2.461366986 podStartE2EDuration="1m3.860687118s" podCreationTimestamp="2026-01-25 07:59:18 +0000 UTC" firstStartedPulling="2026-01-25 07:59:20.171024978 +0000 UTC m=+142.844848511" lastFinishedPulling="2026-01-25 08:00:21.57034511 +0000 UTC 
m=+204.244168643" observedRunningTime="2026-01-25 08:00:21.857828145 +0000 UTC m=+204.531651678" watchObservedRunningTime="2026-01-25 08:00:21.860687118 +0000 UTC m=+204.534510651" Jan 25 08:00:21 crc kubenswrapper[4832]: I0125 08:00:21.880110 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lbczx" podStartSLOduration=2.843717593 podStartE2EDuration="1m3.88008988s" podCreationTimestamp="2026-01-25 07:59:18 +0000 UTC" firstStartedPulling="2026-01-25 07:59:20.181900545 +0000 UTC m=+142.855724078" lastFinishedPulling="2026-01-25 08:00:21.218272822 +0000 UTC m=+203.892096365" observedRunningTime="2026-01-25 08:00:21.876933347 +0000 UTC m=+204.550756890" watchObservedRunningTime="2026-01-25 08:00:21.88008988 +0000 UTC m=+204.553913413" Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.149798 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.149869 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.149929 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.150587 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.150727 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a" gracePeriod=600 Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.844895 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a" exitCode=0 Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.845015 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a"} Jan 25 08:00:22 crc kubenswrapper[4832]: I0125 08:00:22.846306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"ab67a00f3383f3ebf817c9eee1dbd1d6d82dc6ce62d279f6c63b25d61faa31bb"} Jan 25 08:00:23 crc kubenswrapper[4832]: I0125 08:00:23.853614 4832 generic.go:334] "Generic (PLEG): container finished" podID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerID="63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381" exitCode=0 Jan 25 08:00:23 crc kubenswrapper[4832]: I0125 08:00:23.853691 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rxv7n" event={"ID":"af8ce14e-9431-4f98-b50b-761208bdab1c","Type":"ContainerDied","Data":"63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381"} Jan 25 08:00:23 crc kubenswrapper[4832]: I0125 08:00:23.856078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5q4h" event={"ID":"57a844ec-e431-4caf-9471-00460db6589c","Type":"ContainerStarted","Data":"0f88328af848ab944485051c6773a9007863087997305426a42d101ee4f83b54"} Jan 25 08:00:23 crc kubenswrapper[4832]: I0125 08:00:23.857829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6nwt" event={"ID":"479892d8-5a53-40ee-9f16-d4480c2c3e03","Type":"ContainerStarted","Data":"ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d"} Jan 25 08:00:23 crc kubenswrapper[4832]: I0125 08:00:23.909613 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c5q4h" podStartSLOduration=2.071802886 podStartE2EDuration="1m3.909594209s" podCreationTimestamp="2026-01-25 07:59:20 +0000 UTC" firstStartedPulling="2026-01-25 07:59:21.244005711 +0000 UTC m=+143.917829244" lastFinishedPulling="2026-01-25 08:00:23.081797014 +0000 UTC m=+205.755620567" observedRunningTime="2026-01-25 08:00:23.90566165 +0000 UTC m=+206.579485203" watchObservedRunningTime="2026-01-25 08:00:23.909594209 +0000 UTC m=+206.583417742" Jan 25 08:00:25 crc kubenswrapper[4832]: I0125 08:00:25.871447 4832 generic.go:334] "Generic (PLEG): container finished" podID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerID="ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d" exitCode=0 Jan 25 08:00:25 crc kubenswrapper[4832]: I0125 08:00:25.871800 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6nwt" 
event={"ID":"479892d8-5a53-40ee-9f16-d4480c2c3e03","Type":"ContainerDied","Data":"ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d"} Jan 25 08:00:26 crc kubenswrapper[4832]: I0125 08:00:26.951822 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hgzxd" Jan 25 08:00:26 crc kubenswrapper[4832]: I0125 08:00:26.952128 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hgzxd" Jan 25 08:00:27 crc kubenswrapper[4832]: I0125 08:00:27.155693 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hgzxd" Jan 25 08:00:27 crc kubenswrapper[4832]: I0125 08:00:27.473093 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 08:00:27 crc kubenswrapper[4832]: I0125 08:00:27.473142 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 08:00:27 crc kubenswrapper[4832]: I0125 08:00:27.779276 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 08:00:27 crc kubenswrapper[4832]: I0125 08:00:27.914803 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 08:00:27 crc kubenswrapper[4832]: I0125 08:00:27.916438 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hgzxd" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.010781 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.010873 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.047158 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.324612 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.324656 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.359621 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.929549 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 08:00:29 crc kubenswrapper[4832]: I0125 08:00:29.932685 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 08:00:30 crc kubenswrapper[4832]: I0125 08:00:30.541726 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 08:00:30 crc kubenswrapper[4832]: I0125 08:00:30.541785 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 08:00:30 crc kubenswrapper[4832]: I0125 08:00:30.576950 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 08:00:30 crc kubenswrapper[4832]: I0125 08:00:30.655509 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t7rlc"] Jan 25 08:00:30 crc kubenswrapper[4832]: I0125 08:00:30.656692 4832 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t7rlc" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="registry-server" containerID="cri-o://87aa48e43dfdd75999b37bb0543d4cd29303b6f58532fd179379d3413e8edd7b" gracePeriod=2 Jan 25 08:00:30 crc kubenswrapper[4832]: I0125 08:00:30.929622 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 08:00:31 crc kubenswrapper[4832]: I0125 08:00:31.901756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxv7n" event={"ID":"af8ce14e-9431-4f98-b50b-761208bdab1c","Type":"ContainerStarted","Data":"bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2"} Jan 25 08:00:32 crc kubenswrapper[4832]: I0125 08:00:32.053372 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbczx"] Jan 25 08:00:32 crc kubenswrapper[4832]: I0125 08:00:32.053644 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lbczx" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="registry-server" containerID="cri-o://fbc90fbd6aed76aa89d9497e0517b25c7348ee124eb9b171e40d0e11d0ef84b4" gracePeriod=2 Jan 25 08:00:33 crc kubenswrapper[4832]: I0125 08:00:33.052560 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c5q4h"] Jan 25 08:00:33 crc kubenswrapper[4832]: I0125 08:00:33.052971 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c5q4h" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="registry-server" containerID="cri-o://0f88328af848ab944485051c6773a9007863087997305426a42d101ee4f83b54" gracePeriod=2 Jan 25 08:00:33 crc kubenswrapper[4832]: I0125 08:00:33.915053 4832 generic.go:334] "Generic (PLEG): 
container finished" podID="f61facf9-6be6-4e92-b219-73da2609112a" containerID="fbc90fbd6aed76aa89d9497e0517b25c7348ee124eb9b171e40d0e11d0ef84b4" exitCode=0 Jan 25 08:00:33 crc kubenswrapper[4832]: I0125 08:00:33.915129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbczx" event={"ID":"f61facf9-6be6-4e92-b219-73da2609112a","Type":"ContainerDied","Data":"fbc90fbd6aed76aa89d9497e0517b25c7348ee124eb9b171e40d0e11d0ef84b4"} Jan 25 08:00:33 crc kubenswrapper[4832]: I0125 08:00:33.917760 4832 generic.go:334] "Generic (PLEG): container finished" podID="41a974dc-0fea-4f11-930e-c11f28840e71" containerID="87aa48e43dfdd75999b37bb0543d4cd29303b6f58532fd179379d3413e8edd7b" exitCode=0 Jan 25 08:00:33 crc kubenswrapper[4832]: I0125 08:00:33.917816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7rlc" event={"ID":"41a974dc-0fea-4f11-930e-c11f28840e71","Type":"ContainerDied","Data":"87aa48e43dfdd75999b37bb0543d4cd29303b6f58532fd179379d3413e8edd7b"} Jan 25 08:00:33 crc kubenswrapper[4832]: I0125 08:00:33.941274 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rxv7n" podStartSLOduration=6.3382037239999995 podStartE2EDuration="1m16.941240808s" podCreationTimestamp="2026-01-25 07:59:17 +0000 UTC" firstStartedPulling="2026-01-25 07:59:19.090156659 +0000 UTC m=+141.763980192" lastFinishedPulling="2026-01-25 08:00:29.693193743 +0000 UTC m=+212.367017276" observedRunningTime="2026-01-25 08:00:33.93577559 +0000 UTC m=+216.609599133" watchObservedRunningTime="2026-01-25 08:00:33.941240808 +0000 UTC m=+216.615064381" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.252479 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.386208 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-catalog-content\") pod \"f61facf9-6be6-4e92-b219-73da2609112a\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.386277 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lkrn\" (UniqueName: \"kubernetes.io/projected/f61facf9-6be6-4e92-b219-73da2609112a-kube-api-access-2lkrn\") pod \"f61facf9-6be6-4e92-b219-73da2609112a\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.386337 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-utilities\") pod \"f61facf9-6be6-4e92-b219-73da2609112a\" (UID: \"f61facf9-6be6-4e92-b219-73da2609112a\") " Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.387169 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-utilities" (OuterVolumeSpecName: "utilities") pod "f61facf9-6be6-4e92-b219-73da2609112a" (UID: "f61facf9-6be6-4e92-b219-73da2609112a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.391312 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61facf9-6be6-4e92-b219-73da2609112a-kube-api-access-2lkrn" (OuterVolumeSpecName: "kube-api-access-2lkrn") pod "f61facf9-6be6-4e92-b219-73da2609112a" (UID: "f61facf9-6be6-4e92-b219-73da2609112a"). InnerVolumeSpecName "kube-api-access-2lkrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.413101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f61facf9-6be6-4e92-b219-73da2609112a" (UID: "f61facf9-6be6-4e92-b219-73da2609112a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.487738 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.487780 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61facf9-6be6-4e92-b219-73da2609112a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.487795 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lkrn\" (UniqueName: \"kubernetes.io/projected/f61facf9-6be6-4e92-b219-73da2609112a-kube-api-access-2lkrn\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.925716 4832 generic.go:334] "Generic (PLEG): container finished" podID="57a844ec-e431-4caf-9471-00460db6589c" containerID="0f88328af848ab944485051c6773a9007863087997305426a42d101ee4f83b54" exitCode=0 Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.925787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5q4h" event={"ID":"57a844ec-e431-4caf-9471-00460db6589c","Type":"ContainerDied","Data":"0f88328af848ab944485051c6773a9007863087997305426a42d101ee4f83b54"} Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.928043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-lbczx" event={"ID":"f61facf9-6be6-4e92-b219-73da2609112a","Type":"ContainerDied","Data":"9c2aea71028eadd5c75dda6ae960d3cfbfb9c5f3eadca52b33ba3f2b0d4a6922"} Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.928078 4832 scope.go:117] "RemoveContainer" containerID="fbc90fbd6aed76aa89d9497e0517b25c7348ee124eb9b171e40d0e11d0ef84b4" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.928145 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbczx" Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.955422 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbczx"] Jan 25 08:00:34 crc kubenswrapper[4832]: I0125 08:00:34.958191 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbczx"] Jan 25 08:00:35 crc kubenswrapper[4832]: I0125 08:00:35.675589 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61facf9-6be6-4e92-b219-73da2609112a" path="/var/lib/kubelet/pods/f61facf9-6be6-4e92-b219-73da2609112a/volumes" Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.801714 4832 scope.go:117] "RemoveContainer" containerID="708246203bfeb2d1d9c9fed6321fb4c347791c88c2ba2374fc0d93d2b7dde952" Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.842642 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.915446 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-catalog-content\") pod \"41a974dc-0fea-4f11-930e-c11f28840e71\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.915502 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9gvc\" (UniqueName: \"kubernetes.io/projected/41a974dc-0fea-4f11-930e-c11f28840e71-kube-api-access-n9gvc\") pod \"41a974dc-0fea-4f11-930e-c11f28840e71\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.915520 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-utilities\") pod \"41a974dc-0fea-4f11-930e-c11f28840e71\" (UID: \"41a974dc-0fea-4f11-930e-c11f28840e71\") " Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.916641 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-utilities" (OuterVolumeSpecName: "utilities") pod "41a974dc-0fea-4f11-930e-c11f28840e71" (UID: "41a974dc-0fea-4f11-930e-c11f28840e71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.921706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a974dc-0fea-4f11-930e-c11f28840e71-kube-api-access-n9gvc" (OuterVolumeSpecName: "kube-api-access-n9gvc") pod "41a974dc-0fea-4f11-930e-c11f28840e71" (UID: "41a974dc-0fea-4f11-930e-c11f28840e71"). InnerVolumeSpecName "kube-api-access-n9gvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.940446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t7rlc" event={"ID":"41a974dc-0fea-4f11-930e-c11f28840e71","Type":"ContainerDied","Data":"8e312a737e7edaab9ff8909117577b06b829fd2dda2596086481329749b7220a"} Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.940496 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t7rlc" Jan 25 08:00:36 crc kubenswrapper[4832]: I0125 08:00:36.970705 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41a974dc-0fea-4f11-930e-c11f28840e71" (UID: "41a974dc-0fea-4f11-930e-c11f28840e71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.017487 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.017557 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9gvc\" (UniqueName: \"kubernetes.io/projected/41a974dc-0fea-4f11-930e-c11f28840e71-kube-api-access-n9gvc\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.017577 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41a974dc-0fea-4f11-930e-c11f28840e71-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.268835 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-t7rlc"] Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.272152 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t7rlc"] Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.611482 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.611569 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.679777 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" path="/var/lib/kubelet/pods/41a974dc-0fea-4f11-930e-c11f28840e71/volumes" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.680918 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 08:00:37 crc kubenswrapper[4832]: I0125 08:00:37.991745 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.172785 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.246426 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-catalog-content\") pod \"57a844ec-e431-4caf-9471-00460db6589c\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.246520 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-utilities\") pod \"57a844ec-e431-4caf-9471-00460db6589c\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.246575 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rqj\" (UniqueName: \"kubernetes.io/projected/57a844ec-e431-4caf-9471-00460db6589c-kube-api-access-w5rqj\") pod \"57a844ec-e431-4caf-9471-00460db6589c\" (UID: \"57a844ec-e431-4caf-9471-00460db6589c\") " Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.247241 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-utilities" (OuterVolumeSpecName: "utilities") pod "57a844ec-e431-4caf-9471-00460db6589c" (UID: "57a844ec-e431-4caf-9471-00460db6589c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.250573 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a844ec-e431-4caf-9471-00460db6589c-kube-api-access-w5rqj" (OuterVolumeSpecName: "kube-api-access-w5rqj") pod "57a844ec-e431-4caf-9471-00460db6589c" (UID: "57a844ec-e431-4caf-9471-00460db6589c"). InnerVolumeSpecName "kube-api-access-w5rqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.347872 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5rqj\" (UniqueName: \"kubernetes.io/projected/57a844ec-e431-4caf-9471-00460db6589c-kube-api-access-w5rqj\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.347913 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.551631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a844ec-e431-4caf-9471-00460db6589c" (UID: "57a844ec-e431-4caf-9471-00460db6589c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.591088 4832 scope.go:117] "RemoveContainer" containerID="f2f1cfdcb4c31c4471992b5911dc06df838ccff4afdf30db167fe8223454f869" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.613886 4832 scope.go:117] "RemoveContainer" containerID="87aa48e43dfdd75999b37bb0543d4cd29303b6f58532fd179379d3413e8edd7b" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.634912 4832 scope.go:117] "RemoveContainer" containerID="a8b08a330140a4b11f36f328b0b3831deaf08c88e41f995f0ebe478741dc6689" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.650284 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a844ec-e431-4caf-9471-00460db6589c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.663863 4832 scope.go:117] "RemoveContainer" 
containerID="26c22dde58d1e0d8a24d93e22410c4c4b46912472c0afbde1cbf51960e9ce222" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.963855 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6nwt" event={"ID":"479892d8-5a53-40ee-9f16-d4480c2c3e03","Type":"ContainerStarted","Data":"0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05"} Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.967740 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ntqw" event={"ID":"e70962d8-5db3-43c3-84bf-380addc38e9c","Type":"ContainerStarted","Data":"b14cb83643fc32267fb0eab12b9d0935caf7c094e1451e3835b0d7b781d4da46"} Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.973262 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5q4h" event={"ID":"57a844ec-e431-4caf-9471-00460db6589c","Type":"ContainerDied","Data":"7b714b4fe08a81b53110b640ac40fc64d4af249cc00d60dc79131c287d99f3d8"} Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.973311 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c5q4h" Jan 25 08:00:39 crc kubenswrapper[4832]: I0125 08:00:39.973322 4832 scope.go:117] "RemoveContainer" containerID="0f88328af848ab944485051c6773a9007863087997305426a42d101ee4f83b54" Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.009035 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f6nwt" podStartSLOduration=2.676435421 podStartE2EDuration="1m21.008987048s" podCreationTimestamp="2026-01-25 07:59:19 +0000 UTC" firstStartedPulling="2026-01-25 07:59:21.248022216 +0000 UTC m=+143.921845749" lastFinishedPulling="2026-01-25 08:00:39.580573843 +0000 UTC m=+222.254397376" observedRunningTime="2026-01-25 08:00:40.003004353 +0000 UTC m=+222.676827916" watchObservedRunningTime="2026-01-25 08:00:40.008987048 +0000 UTC m=+222.682810581" Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.042544 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c5q4h"] Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.043130 4832 scope.go:117] "RemoveContainer" containerID="cc48a6d0c12396a581fe7c35a5a6390b61ba39789f7713c9e93904edd339fece" Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.045815 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c5q4h"] Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.106468 4832 scope.go:117] "RemoveContainer" containerID="cfa24793b9bb832c35772653f752387268ede9def4b222f81bd79c32bc9bc02e" Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.112465 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.112503 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 08:00:40 crc kubenswrapper[4832]: 
I0125 08:00:40.127614 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q5r28"] Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.983814 4832 generic.go:334] "Generic (PLEG): container finished" podID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerID="b14cb83643fc32267fb0eab12b9d0935caf7c094e1451e3835b0d7b781d4da46" exitCode=0 Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.983884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ntqw" event={"ID":"e70962d8-5db3-43c3-84bf-380addc38e9c","Type":"ContainerDied","Data":"b14cb83643fc32267fb0eab12b9d0935caf7c094e1451e3835b0d7b781d4da46"} Jan 25 08:00:40 crc kubenswrapper[4832]: I0125 08:00:40.984145 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ntqw" event={"ID":"e70962d8-5db3-43c3-84bf-380addc38e9c","Type":"ContainerStarted","Data":"c80a8496e4fb8daab894185ccd7abe905b3a6f0e511ef2e71a15cdfbad3cc4df"} Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.006603 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7ntqw" podStartSLOduration=3.643335781 podStartE2EDuration="1m25.006573654s" podCreationTimestamp="2026-01-25 07:59:16 +0000 UTC" firstStartedPulling="2026-01-25 07:59:19.12601862 +0000 UTC m=+141.799842153" lastFinishedPulling="2026-01-25 08:00:40.489256493 +0000 UTC m=+223.163080026" observedRunningTime="2026-01-25 08:00:41.002727498 +0000 UTC m=+223.676551031" watchObservedRunningTime="2026-01-25 08:00:41.006573654 +0000 UTC m=+223.680397187" Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.158171 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f6nwt" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="registry-server" probeResult="failure" output=< Jan 25 08:00:41 crc kubenswrapper[4832]: 
timeout: failed to connect service ":50051" within 1s Jan 25 08:00:41 crc kubenswrapper[4832]: > Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.451432 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxv7n"] Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.451685 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rxv7n" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="registry-server" containerID="cri-o://bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2" gracePeriod=2 Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.677306 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a844ec-e431-4caf-9471-00460db6589c" path="/var/lib/kubelet/pods/57a844ec-e431-4caf-9471-00460db6589c/volumes" Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.810111 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.983970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-catalog-content\") pod \"af8ce14e-9431-4f98-b50b-761208bdab1c\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.984284 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz87r\" (UniqueName: \"kubernetes.io/projected/af8ce14e-9431-4f98-b50b-761208bdab1c-kube-api-access-pz87r\") pod \"af8ce14e-9431-4f98-b50b-761208bdab1c\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.984364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-utilities\") pod \"af8ce14e-9431-4f98-b50b-761208bdab1c\" (UID: \"af8ce14e-9431-4f98-b50b-761208bdab1c\") " Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.985415 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-utilities" (OuterVolumeSpecName: "utilities") pod "af8ce14e-9431-4f98-b50b-761208bdab1c" (UID: "af8ce14e-9431-4f98-b50b-761208bdab1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.988961 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8ce14e-9431-4f98-b50b-761208bdab1c-kube-api-access-pz87r" (OuterVolumeSpecName: "kube-api-access-pz87r") pod "af8ce14e-9431-4f98-b50b-761208bdab1c" (UID: "af8ce14e-9431-4f98-b50b-761208bdab1c"). InnerVolumeSpecName "kube-api-access-pz87r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.989918 4832 generic.go:334] "Generic (PLEG): container finished" podID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerID="bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2" exitCode=0 Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.989955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxv7n" event={"ID":"af8ce14e-9431-4f98-b50b-761208bdab1c","Type":"ContainerDied","Data":"bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2"} Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.989988 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxv7n" event={"ID":"af8ce14e-9431-4f98-b50b-761208bdab1c","Type":"ContainerDied","Data":"d382ef68ef07fc75cceda225337a1834a482e1607f321dd4423475b08cf3e3fd"} Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.990013 4832 scope.go:117] "RemoveContainer" containerID="bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2" Jan 25 08:00:41 crc kubenswrapper[4832]: I0125 08:00:41.990130 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxv7n" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.016717 4832 scope.go:117] "RemoveContainer" containerID="63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.033715 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af8ce14e-9431-4f98-b50b-761208bdab1c" (UID: "af8ce14e-9431-4f98-b50b-761208bdab1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.039587 4832 scope.go:117] "RemoveContainer" containerID="b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.057890 4832 scope.go:117] "RemoveContainer" containerID="bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2" Jan 25 08:00:42 crc kubenswrapper[4832]: E0125 08:00:42.058429 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2\": container with ID starting with bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2 not found: ID does not exist" containerID="bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.058495 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2"} err="failed to get container status \"bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2\": rpc error: code = NotFound desc = could not find container \"bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2\": container with ID starting with bfd51b43de0416b4f33b7d528415a95230564b817fc30d5f7df961a4993eceb2 not found: ID does not exist" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.058530 4832 scope.go:117] "RemoveContainer" containerID="63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381" Jan 25 08:00:42 crc kubenswrapper[4832]: E0125 08:00:42.058904 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381\": container with ID starting with 
63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381 not found: ID does not exist" containerID="63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.058938 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381"} err="failed to get container status \"63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381\": rpc error: code = NotFound desc = could not find container \"63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381\": container with ID starting with 63d9abe64b3650fa4c01674511f7ff648d904a445105fff3cdbfa2649267a381 not found: ID does not exist" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.058956 4832 scope.go:117] "RemoveContainer" containerID="b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092" Jan 25 08:00:42 crc kubenswrapper[4832]: E0125 08:00:42.059538 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092\": container with ID starting with b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092 not found: ID does not exist" containerID="b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.059580 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092"} err="failed to get container status \"b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092\": rpc error: code = NotFound desc = could not find container \"b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092\": container with ID starting with b270e4b790ebc92e727cbbe5c83877d8d93626934a92e8742f1d4375db64f092 not found: ID does not 
exist" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.085130 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz87r\" (UniqueName: \"kubernetes.io/projected/af8ce14e-9431-4f98-b50b-761208bdab1c-kube-api-access-pz87r\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.085157 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.085167 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af8ce14e-9431-4f98-b50b-761208bdab1c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.326410 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxv7n"] Jan 25 08:00:42 crc kubenswrapper[4832]: I0125 08:00:42.332831 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rxv7n"] Jan 25 08:00:43 crc kubenswrapper[4832]: I0125 08:00:43.683086 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" path="/var/lib/kubelet/pods/af8ce14e-9431-4f98-b50b-761208bdab1c/volumes" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.102583 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.102881 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5919b8-3fe4-439b-b6dd-c23648b81b1e" containerName="pruner" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.102904 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5919b8-3fe4-439b-b6dd-c23648b81b1e" containerName="pruner" Jan 25 08:00:46 crc 
kubenswrapper[4832]: E0125 08:00:46.102924 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.102934 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.102950 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.102961 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.102978 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.102987 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103005 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103015 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103031 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169d3ee1-b6be-49bc-9522-c3579c6965f4" containerName="collect-profiles" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103041 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="169d3ee1-b6be-49bc-9522-c3579c6965f4" containerName="collect-profiles" Jan 25 08:00:46 crc 
kubenswrapper[4832]: E0125 08:00:46.103060 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103071 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103085 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103095 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103112 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103123 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103134 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103145 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103160 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103170 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="registry-server" Jan 25 08:00:46 crc 
kubenswrapper[4832]: E0125 08:00:46.103184 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103194 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103214 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103223 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="extract-utilities" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.103252 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103264 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="extract-content" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103386 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="169d3ee1-b6be-49bc-9522-c3579c6965f4" containerName="collect-profiles" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103428 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61facf9-6be6-4e92-b219-73da2609112a" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103441 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a844ec-e431-4caf-9471-00460db6589c" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103453 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a974dc-0fea-4f11-930e-c11f28840e71" containerName="registry-server" Jan 25 
08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103462 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8ce14e-9431-4f98-b50b-761208bdab1c" containerName="registry-server" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103475 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5919b8-3fe4-439b-b6dd-c23648b81b1e" containerName="pruner" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.103855 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.104031 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.104148 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910" gracePeriod=15 Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.104232 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5" gracePeriod=15 Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.104265 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21" gracePeriod=15 Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.104369 4832 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5" gracePeriod=15 Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.104288 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25" gracePeriod=15 Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.107873 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.108202 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.108359 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.108470 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.108568 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.108652 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.108729 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.108813 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.108896 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.108982 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109057 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.109141 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109230 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109460 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109562 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109649 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109748 4832 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109829 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.109905 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.110124 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.110219 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.142394 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.230812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.230897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.230932 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.231009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.231023 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.231050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.231066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.231085 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332457 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332528 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332550 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332565 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332576 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332615 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332619 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332561 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332688 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.332755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: I0125 08:00:46.443948 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:46 crc kubenswrapper[4832]: W0125 08:00:46.467332 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-14c367f42d2012202715f73ab25fba7c02accd09e2ad3aaf1891dbd1f7b77e95 WatchSource:0}: Error finding container 14c367f42d2012202715f73ab25fba7c02accd09e2ad3aaf1891dbd1f7b77e95: Status 404 returned error can't find the container with id 14c367f42d2012202715f73ab25fba7c02accd09e2ad3aaf1891dbd1f7b77e95 Jan 25 08:00:46 crc kubenswrapper[4832]: E0125 08:00:46.472493 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188dea7be51343c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 08:00:46.471594951 +0000 UTC m=+229.145418484,LastTimestamp:2026-01-25 08:00:46.471594951 +0000 UTC m=+229.145418484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.021999 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.024445 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.025704 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5" exitCode=0 Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.025757 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5" exitCode=0 Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.025773 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21" exitCode=0 Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.025791 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25" exitCode=2 Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.025834 4832 scope.go:117] "RemoveContainer" containerID="7e2213b4c4748dc37cf94e9b977630270dedbabf28e81c8a6d75e4ee3346ad7a" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.028798 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" containerID="8b38f069369397f5371183e37b5eb0cab4e4d4855c7953a9e90f7a6768a8d7d4" exitCode=0 Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.028889 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b9d581a-eedd-4f2b-94a2-e175bbc4530a","Type":"ContainerDied","Data":"8b38f069369397f5371183e37b5eb0cab4e4d4855c7953a9e90f7a6768a8d7d4"} Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.029910 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.030363 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.030512 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f"} Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.030576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"14c367f42d2012202715f73ab25fba7c02accd09e2ad3aaf1891dbd1f7b77e95"} Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.031486 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: E0125 08:00:47.031550 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.032041 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.148315 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.148436 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.197044 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.198018 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.198573 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.199126 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: E0125 08:00:47.484481 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188dea7be51343c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 08:00:46.471594951 +0000 UTC m=+229.145418484,LastTimestamp:2026-01-25 08:00:46.471594951 +0000 UTC m=+229.145418484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.674902 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.675338 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:47 crc kubenswrapper[4832]: I0125 08:00:47.675966 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.051032 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.097756 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.098477 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.099204 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.481437 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.483653 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.484609 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.488303 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.489727 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.490744 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.491441 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.492053 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.562996 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-var-lock\") pod \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563113 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kubelet-dir\") pod \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563155 
4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-var-lock" (OuterVolumeSpecName: "var-lock") pod "8b9d581a-eedd-4f2b-94a2-e175bbc4530a" (UID: "8b9d581a-eedd-4f2b-94a2-e175bbc4530a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563165 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563200 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563223 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563229 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b9d581a-eedd-4f2b-94a2-e175bbc4530a" (UID: "8b9d581a-eedd-4f2b-94a2-e175bbc4530a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563286 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563337 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kube-api-access\") pod \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\" (UID: \"8b9d581a-eedd-4f2b-94a2-e175bbc4530a\") " Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.563475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.564232 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.564287 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.564314 4832 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.564340 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.564360 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.569413 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b9d581a-eedd-4f2b-94a2-e175bbc4530a" (UID: "8b9d581a-eedd-4f2b-94a2-e175bbc4530a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:00:48 crc kubenswrapper[4832]: I0125 08:00:48.666039 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b9d581a-eedd-4f2b-94a2-e175bbc4530a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.062959 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.064214 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910" exitCode=0 Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.064310 4832 scope.go:117] "RemoveContainer" containerID="56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.064365 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.069944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b9d581a-eedd-4f2b-94a2-e175bbc4530a","Type":"ContainerDied","Data":"109e15081a2868139b23b8b6b2de02ff0e98a1eba83b14bf50a2375b136b5814"} Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.070047 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="109e15081a2868139b23b8b6b2de02ff0e98a1eba83b14bf50a2375b136b5814" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.069968 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.087978 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.088181 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.088376 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.091222 4832 scope.go:117] "RemoveContainer" containerID="7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.093042 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.093342 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" 
pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.093753 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.113830 4832 scope.go:117] "RemoveContainer" containerID="37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.135488 4832 scope.go:117] "RemoveContainer" containerID="959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.157593 4832 scope.go:117] "RemoveContainer" containerID="427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.173743 4832 scope.go:117] "RemoveContainer" containerID="b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.192169 4832 scope.go:117] "RemoveContainer" containerID="56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5" Jan 25 08:00:49 crc kubenswrapper[4832]: E0125 08:00:49.192656 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\": container with ID starting with 56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5 not found: ID does not exist" containerID="56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5" Jan 25 08:00:49 crc 
kubenswrapper[4832]: I0125 08:00:49.192692 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5"} err="failed to get container status \"56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\": rpc error: code = NotFound desc = could not find container \"56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5\": container with ID starting with 56d7d5b36830b76c8af4d6a98ec50b4096ef677b7ec94784724d5395dbc5e1a5 not found: ID does not exist" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.192720 4832 scope.go:117] "RemoveContainer" containerID="7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5" Jan 25 08:00:49 crc kubenswrapper[4832]: E0125 08:00:49.193222 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\": container with ID starting with 7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5 not found: ID does not exist" containerID="7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.193259 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5"} err="failed to get container status \"7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\": rpc error: code = NotFound desc = could not find container \"7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5\": container with ID starting with 7c0b0c638bfaa98aaf9932b5ad1b0bfc04ba52038c40f3aa85103388c557ace5 not found: ID does not exist" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.193285 4832 scope.go:117] "RemoveContainer" containerID="37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21" Jan 25 
08:00:49 crc kubenswrapper[4832]: E0125 08:00:49.194092 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\": container with ID starting with 37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21 not found: ID does not exist" containerID="37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.194133 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21"} err="failed to get container status \"37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\": rpc error: code = NotFound desc = could not find container \"37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21\": container with ID starting with 37e9206fcc440929199c51b318bab8d2c23814d1307eaed596434c12edf2ed21 not found: ID does not exist" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.194159 4832 scope.go:117] "RemoveContainer" containerID="959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25" Jan 25 08:00:49 crc kubenswrapper[4832]: E0125 08:00:49.195293 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\": container with ID starting with 959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25 not found: ID does not exist" containerID="959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.195512 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25"} err="failed to get container status 
\"959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\": rpc error: code = NotFound desc = could not find container \"959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25\": container with ID starting with 959f94a48ef709e3a3ca62ab6fda1874fd98e4fa70fbde0fa03da51bc8d0ed25 not found: ID does not exist" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.195584 4832 scope.go:117] "RemoveContainer" containerID="427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910" Jan 25 08:00:49 crc kubenswrapper[4832]: E0125 08:00:49.196437 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\": container with ID starting with 427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910 not found: ID does not exist" containerID="427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.196497 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910"} err="failed to get container status \"427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\": rpc error: code = NotFound desc = could not find container \"427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910\": container with ID starting with 427b76c32266adf832d2068d3a55977e793505c5bb68d7b55f73115596094910 not found: ID does not exist" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.196531 4832 scope.go:117] "RemoveContainer" containerID="b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd" Jan 25 08:00:49 crc kubenswrapper[4832]: E0125 08:00:49.197125 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\": container with ID starting with b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd not found: ID does not exist" containerID="b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.197177 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd"} err="failed to get container status \"b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\": rpc error: code = NotFound desc = could not find container \"b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd\": container with ID starting with b5cdefbe9da3ff798b69ba79465cd9b6fce74df31802f14dca3fa58ba5b9d1bd not found: ID does not exist" Jan 25 08:00:49 crc kubenswrapper[4832]: I0125 08:00:49.676616 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.148199 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.149494 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.149890 4832 status_manager.go:851] "Failed to get status for pod" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" pod="openshift-marketplace/redhat-operators-f6nwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f6nwt\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.150079 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.205589 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.206082 4832 status_manager.go:851] "Failed to get status for pod" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" pod="openshift-marketplace/redhat-operators-f6nwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f6nwt\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.206354 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: I0125 08:00:50.206710 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: E0125 08:00:50.410346 4832 
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T08:00:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T08:00:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T08:00:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T08:00:50Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: E0125 08:00:50.411439 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: E0125 08:00:50.412133 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: E0125 08:00:50.413065 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: E0125 08:00:50.413563 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:50 crc kubenswrapper[4832]: E0125 08:00:50.413600 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.002299 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.003436 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.003819 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.004230 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.004783 4832 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:53 crc kubenswrapper[4832]: I0125 08:00:53.004836 4832 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.005170 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="200ms" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.206129 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="400ms" Jan 25 08:00:53 crc kubenswrapper[4832]: E0125 08:00:53.607441 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="800ms" Jan 25 08:00:54 crc kubenswrapper[4832]: E0125 08:00:54.408723 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection refused" interval="1.6s" Jan 25 08:00:56 crc kubenswrapper[4832]: E0125 08:00:56.009652 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.213:6443: connect: connection 
refused" interval="3.2s" Jan 25 08:00:56 crc kubenswrapper[4832]: I0125 08:00:56.669040 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:56 crc kubenswrapper[4832]: I0125 08:00:56.671283 4832 status_manager.go:851] "Failed to get status for pod" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" pod="openshift-marketplace/redhat-operators-f6nwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f6nwt\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:56 crc kubenswrapper[4832]: I0125 08:00:56.671912 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:56 crc kubenswrapper[4832]: I0125 08:00:56.672416 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:56 crc kubenswrapper[4832]: I0125 08:00:56.686339 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:00:56 crc kubenswrapper[4832]: I0125 08:00:56.686384 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:00:56 crc kubenswrapper[4832]: E0125 08:00:56.686924 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:56 crc kubenswrapper[4832]: I0125 08:00:56.687630 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:57 crc kubenswrapper[4832]: I0125 08:00:57.115854 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"57d7e41933607240a03dec52ee9bcec4e2f27fcd2c2d8932e4a38fa526f38399"} Jan 25 08:00:57 crc kubenswrapper[4832]: E0125 08:00:57.485908 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.213:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188dea7be51343c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 08:00:46.471594951 +0000 UTC m=+229.145418484,LastTimestamp:2026-01-25 08:00:46.471594951 +0000 UTC m=+229.145418484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 08:00:57 crc kubenswrapper[4832]: I0125 08:00:57.674576 4832 
status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:57 crc kubenswrapper[4832]: I0125 08:00:57.674995 4832 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:57 crc kubenswrapper[4832]: I0125 08:00:57.675252 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:57 crc kubenswrapper[4832]: I0125 08:00:57.675614 4832 status_manager.go:851] "Failed to get status for pod" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" pod="openshift-marketplace/redhat-operators-f6nwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f6nwt\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.122357 4832 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="10b6eb662d6faa1fb686842b345e949ce2a6fa16c4bc032de6a2e800a25230be" exitCode=0 Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.122425 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"10b6eb662d6faa1fb686842b345e949ce2a6fa16c4bc032de6a2e800a25230be"} Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.123329 4832 status_manager.go:851] "Failed to get status for pod" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" pod="openshift-marketplace/certified-operators-7ntqw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7ntqw\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.123550 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.123571 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.123783 4832 status_manager.go:851] "Failed to get status for pod" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" pod="openshift-marketplace/redhat-operators-f6nwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f6nwt\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:58 crc kubenswrapper[4832]: E0125 08:00:58.123801 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.128530 4832 status_manager.go:851] "Failed to get status for pod" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:58 crc kubenswrapper[4832]: I0125 08:00:58.128988 4832 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.213:6443: connect: connection refused" Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.092350 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.092777 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.130407 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.130452 4832 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb" exitCode=1 Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.130502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb"} Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.131022 4832 scope.go:117] "RemoveContainer" containerID="b044eb1a229266f00938c08da6aa9e86425ca71d08c8434d7214d54850c36bbb" Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.133847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5be0ff40e63804c12d9dd8ad3129ea38c1612b54a47ff84342aee73f47ce0a13"} Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.133878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2223cff9db6fb0c352802d44d1a4259ed1df3d635b051bcf2b441a0897b8a85e"} Jan 25 08:00:59 crc kubenswrapper[4832]: I0125 08:00:59.133896 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c154af68c32cc61c85cbfd197618d297f0c8a565bde07be3512bad69705b4ab"} Jan 25 08:01:00 crc kubenswrapper[4832]: I0125 08:01:00.142307 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 25 08:01:00 crc kubenswrapper[4832]: I0125 08:01:00.142444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c2e09b97e8a144382fad57af76df78cc29791a1b823d01f3c37980195200d12d"} Jan 25 08:01:00 crc kubenswrapper[4832]: I0125 08:01:00.148044 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fefb0578cca8fa3656e2aef74d92152ea8499f68ac602c7d839068e08c82b2b6"} Jan 25 08:01:00 crc kubenswrapper[4832]: I0125 08:01:00.148098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a7786320b3efd1a5d88cdefaaab618f2adcdd45969e05c56ff052c385c15ec2c"} Jan 25 08:01:00 crc kubenswrapper[4832]: I0125 08:01:00.148378 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:00 crc kubenswrapper[4832]: I0125 08:01:00.148421 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:00 crc kubenswrapper[4832]: I0125 08:01:00.148631 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:01:01 crc kubenswrapper[4832]: I0125 08:01:01.688744 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:01:01 crc kubenswrapper[4832]: I0125 08:01:01.689072 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:01:01 crc kubenswrapper[4832]: I0125 08:01:01.697160 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.157756 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" podUID="cb0834ac-2ef5-48dc-a86f-511e79c897f7" containerName="oauth-openshift" 
containerID="cri-o://4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e" gracePeriod=15 Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.159839 4832 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.510456 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677791 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-service-ca\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677829 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-idp-0-file-data\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-trusted-ca-bundle\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677881 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-policies\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: 
\"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677928 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-login\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-serving-cert\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677978 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-ocp-branding-template\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.677993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-cliconfig\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.678029 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-error\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 
crc kubenswrapper[4832]: I0125 08:01:05.678050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-provider-selection\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.678074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-dir\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.678090 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x5qc\" (UniqueName: \"kubernetes.io/projected/cb0834ac-2ef5-48dc-a86f-511e79c897f7-kube-api-access-4x5qc\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.678125 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-router-certs\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.678343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-session\") pod \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\" (UID: \"cb0834ac-2ef5-48dc-a86f-511e79c897f7\") " Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.678474 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.678538 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.679205 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.679265 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.679376 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.680272 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.685099 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.685212 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.685479 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0834ac-2ef5-48dc-a86f-511e79c897f7-kube-api-access-4x5qc" (OuterVolumeSpecName: "kube-api-access-4x5qc") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "kube-api-access-4x5qc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.685887 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.686044 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.686194 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.686415 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.686747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.691781 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "cb0834ac-2ef5-48dc-a86f-511e79c897f7" (UID: "cb0834ac-2ef5-48dc-a86f-511e79c897f7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.771401 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.775331 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779213 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779237 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779812 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779825 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779834 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779843 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779852 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779860 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x5qc\" (UniqueName: \"kubernetes.io/projected/cb0834ac-2ef5-48dc-a86f-511e79c897f7-kube-api-access-4x5qc\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779869 4832 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779877 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779886 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779893 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:05 crc kubenswrapper[4832]: I0125 08:01:05.779901 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0834ac-2ef5-48dc-a86f-511e79c897f7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.177785 4832 generic.go:334] "Generic (PLEG): container finished" podID="cb0834ac-2ef5-48dc-a86f-511e79c897f7" containerID="4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e" exitCode=0 Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.177831 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.177828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" event={"ID":"cb0834ac-2ef5-48dc-a86f-511e79c897f7","Type":"ContainerDied","Data":"4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e"} Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.177960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q5r28" event={"ID":"cb0834ac-2ef5-48dc-a86f-511e79c897f7","Type":"ContainerDied","Data":"2487fbdce256f30617517b45f0729e348432293f6deade7aceb3e47928c6adcb"} Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.177988 4832 scope.go:117] "RemoveContainer" containerID="4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.178097 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.178149 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.178167 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.188829 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.191988 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="a4004a8d-578e-4fcd-ae67-bf19f78aa7c7" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.204046 4832 scope.go:117] "RemoveContainer" containerID="4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e" Jan 25 08:01:06 crc kubenswrapper[4832]: E0125 08:01:06.204788 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e\": container with ID starting with 4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e not found: ID does not exist" containerID="4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e" Jan 25 08:01:06 crc kubenswrapper[4832]: I0125 08:01:06.204846 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e"} err="failed to get container status \"4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e\": rpc error: code = NotFound desc = could not find container \"4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e\": container with ID starting with 4aa2f99a6cb09e58bd131a500f9c11f552be7eba00ee188e76ad7a3b5ac1987e not found: ID does not exist" Jan 25 08:01:07 crc kubenswrapper[4832]: I0125 08:01:07.187224 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:07 crc kubenswrapper[4832]: I0125 08:01:07.187258 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:07 crc kubenswrapper[4832]: I0125 08:01:07.687494 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a4004a8d-578e-4fcd-ae67-bf19f78aa7c7" Jan 25 
08:01:09 crc kubenswrapper[4832]: I0125 08:01:09.095525 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 08:01:11 crc kubenswrapper[4832]: I0125 08:01:11.309243 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 25 08:01:11 crc kubenswrapper[4832]: I0125 08:01:11.904060 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 25 08:01:11 crc kubenswrapper[4832]: I0125 08:01:11.983073 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 25 08:01:12 crc kubenswrapper[4832]: I0125 08:01:12.583865 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 25 08:01:12 crc kubenswrapper[4832]: I0125 08:01:12.730335 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 25 08:01:12 crc kubenswrapper[4832]: I0125 08:01:12.987979 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 25 08:01:13 crc kubenswrapper[4832]: I0125 08:01:13.791617 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 25 08:01:14 crc kubenswrapper[4832]: I0125 08:01:14.354644 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 25 08:01:14 crc kubenswrapper[4832]: I0125 08:01:14.544935 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 08:01:14 crc kubenswrapper[4832]: I0125 08:01:14.982941 4832 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 25 08:01:15 crc kubenswrapper[4832]: I0125 08:01:15.422981 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 25 08:01:15 crc kubenswrapper[4832]: I0125 08:01:15.574358 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 25 08:01:15 crc kubenswrapper[4832]: I0125 08:01:15.950895 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 25 08:01:16 crc kubenswrapper[4832]: I0125 08:01:16.501502 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 25 08:01:16 crc kubenswrapper[4832]: I0125 08:01:16.534731 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 25 08:01:16 crc kubenswrapper[4832]: I0125 08:01:16.879776 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 08:01:16 crc kubenswrapper[4832]: I0125 08:01:16.900225 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 25 08:01:16 crc kubenswrapper[4832]: I0125 08:01:16.952323 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 25 08:01:17 crc kubenswrapper[4832]: I0125 08:01:17.249076 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 25 08:01:17 crc kubenswrapper[4832]: I0125 08:01:17.587194 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 25 08:01:17 crc kubenswrapper[4832]: I0125 08:01:17.684424 4832 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 25 08:01:17 crc kubenswrapper[4832]: I0125 08:01:17.881418 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 25 08:01:17 crc kubenswrapper[4832]: I0125 08:01:17.949782 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.272176 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.396302 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.496984 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.499324 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.549243 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.596572 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.645547 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.713086 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 25 08:01:18 crc kubenswrapper[4832]: I0125 08:01:18.893858 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.007094 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.014109 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.028435 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.232702 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.451377 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.468860 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.534698 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.539757 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q5r28","openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.539837 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-9fc86467f-hsrzz"] Jan 25 08:01:19 crc kubenswrapper[4832]: E0125 08:01:19.540032 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" containerName="installer" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.540050 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" containerName="installer" Jan 25 08:01:19 crc kubenswrapper[4832]: E0125 08:01:19.540063 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0834ac-2ef5-48dc-a86f-511e79c897f7" containerName="oauth-openshift" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.540071 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0834ac-2ef5-48dc-a86f-511e79c897f7" containerName="oauth-openshift" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.540183 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0834ac-2ef5-48dc-a86f-511e79c897f7" containerName="oauth-openshift" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.540201 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9d581a-eedd-4f2b-94a2-e175bbc4530a" containerName="installer" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.540528 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.540574 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4399c971-4476-4d24-ae22-8f9710ee1ea8" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.540639 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.546916 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.547257 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.547296 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.547321 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.547477 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.547536 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.547816 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.547951 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.549328 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.549710 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 25 08:01:19 crc 
kubenswrapper[4832]: I0125 08:01:19.549886 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.550751 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.552952 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.561215 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.563290 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.563336 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.569793 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.577209 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.577185092 podStartE2EDuration="14.577185092s" podCreationTimestamp="2026-01-25 08:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:01:19.574631628 +0000 UTC m=+262.248455161" watchObservedRunningTime="2026-01-25 08:01:19.577185092 +0000 UTC m=+262.251008685" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.631557 4832 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.678314 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0834ac-2ef5-48dc-a86f-511e79c897f7" path="/var/lib/kubelet/pods/cb0834ac-2ef5-48dc-a86f-511e79c897f7/volumes" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.683767 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.683902 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-audit-dir\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.683997 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684169 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684263 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684535 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684616 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqlkl\" (UniqueName: \"kubernetes.io/projected/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-kube-api-access-lqlkl\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc 
kubenswrapper[4832]: I0125 08:01:19.684889 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-audit-policies\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.684971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-session\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.707423 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786572 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-audit-policies\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786604 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-session\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786664 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-audit-dir\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786727 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: 
\"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786778 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786813 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786835 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786887 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786911 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.786934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqlkl\" (UniqueName: \"kubernetes.io/projected/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-kube-api-access-lqlkl\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.787661 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-audit-dir\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.788423 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " 
pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.788430 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.788855 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-audit-policies\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.788970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.793162 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.793444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.793709 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.793755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.793966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-session\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.801376 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 
08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.806532 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.806876 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.806045 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.810347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqlkl\" (UniqueName: \"kubernetes.io/projected/3c8bf29a-c28c-44d7-8e02-77a7ec993a7e-kube-api-access-lqlkl\") pod \"oauth-openshift-9fc86467f-hsrzz\" (UID: \"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e\") " pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.875353 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.942536 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 25 08:01:19 crc kubenswrapper[4832]: I0125 08:01:19.975266 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.145860 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.251855 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.305644 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.338407 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.365917 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.507566 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.723203 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.827155 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 
08:01:20.876095 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.946396 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 25 08:01:20 crc kubenswrapper[4832]: I0125 08:01:20.968403 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.113355 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.134673 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.235708 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.250119 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.270196 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.362364 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.669562 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.801610 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 
08:01:21.864698 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.891517 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 25 08:01:21 crc kubenswrapper[4832]: I0125 08:01:21.972572 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.101834 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.130959 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.152061 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.185428 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.208111 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.291626 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.306985 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.310025 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 25 08:01:22 crc 
kubenswrapper[4832]: I0125 08:01:22.330692 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.398605 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.429203 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.465756 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.516686 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.711562 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.769349 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.778633 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.917263 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.937470 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 25 08:01:22 crc kubenswrapper[4832]: I0125 08:01:22.985665 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 25 
08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.050655 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.207234 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.325168 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.330054 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.334872 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.459535 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.462064 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.495440 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.655090 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.715546 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.731983 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Jan 25 08:01:23 crc kubenswrapper[4832]: I0125 08:01:23.997054 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.073471 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.107812 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.124209 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.150990 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.216177 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.244759 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.290690 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.322100 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.323962 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 
08:01:24.352046 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.366505 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.478796 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.621245 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.648469 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.734247 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.782220 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.892693 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 25 08:01:24 crc kubenswrapper[4832]: I0125 08:01:24.927115 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.019964 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.131691 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" 
Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.318529 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.384148 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.410556 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.578829 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.591715 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.649463 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.670872 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.716436 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.765871 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.815377 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.840356 4832 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.842693 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.881821 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.941823 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 25 08:01:25 crc kubenswrapper[4832]: I0125 08:01:25.997275 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.062255 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.064750 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.069880 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.091212 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.098485 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.146498 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.161483 4832 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.171016 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.172913 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.287351 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.430800 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.433265 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.453099 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.478359 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.585080 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.588058 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.593643 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.614937 4832 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.677798 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.871764 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.897296 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.926451 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.927637 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.939328 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.940630 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 25 08:01:26 crc kubenswrapper[4832]: I0125 08:01:26.981739 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.002960 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.068453 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.138523 4832 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.164563 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.241206 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.298210 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.339171 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.362243 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.409956 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.410166 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f" gracePeriod=5 Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.495106 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.529361 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.544041 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.545007 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.597167 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.693263 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.791374 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.834890 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 08:01:27 crc kubenswrapper[4832]: I0125 08:01:27.956565 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.249306 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.407639 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.463518 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 25 08:01:28 crc 
kubenswrapper[4832]: I0125 08:01:28.525223 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.544730 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.571150 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.591033 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.826684 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 25 08:01:28 crc kubenswrapper[4832]: I0125 08:01:28.960845 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.105478 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.241526 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.333754 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.399975 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.465223 4832 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.481299 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.591950 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.616789 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.686878 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.724991 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.870450 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 25 08:01:29 crc kubenswrapper[4832]: I0125 08:01:29.930353 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.032561 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.038637 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.069598 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9fc86467f-hsrzz"] Jan 25 08:01:30 crc 
kubenswrapper[4832]: I0125 08:01:30.202626 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.300349 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.390402 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.419479 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.514790 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.556675 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.562571 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9fc86467f-hsrzz"] Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.647521 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.695546 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.738359 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.748277 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.908027 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 25 08:01:30 crc kubenswrapper[4832]: I0125 08:01:30.924528 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.216325 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.272245 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.319751 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.350361 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fc86467f-hsrzz_3c8bf29a-c28c-44d7-8e02-77a7ec993a7e/oauth-openshift/0.log" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.350424 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c8bf29a-c28c-44d7-8e02-77a7ec993a7e" containerID="97a314b44c5398b5aa3a6be238f60c75be1ab0db16a05ed571b3ad0436e76069" exitCode=255 Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.350454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" event={"ID":"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e","Type":"ContainerDied","Data":"97a314b44c5398b5aa3a6be238f60c75be1ab0db16a05ed571b3ad0436e76069"} Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.350480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" event={"ID":"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e","Type":"ContainerStarted","Data":"fd16e852611f9882c0c5ea6ffded86f3c1124beb81404743287b48a372b5b13c"} Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.350863 4832 scope.go:117] "RemoveContainer" containerID="97a314b44c5398b5aa3a6be238f60c75be1ab0db16a05ed571b3ad0436e76069" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.353988 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.385708 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.400922 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.405110 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.506233 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.686744 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.735856 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.776939 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.796276 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.816554 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.934335 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 25 08:01:31 crc kubenswrapper[4832]: I0125 08:01:31.980931 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.006222 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.030724 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.105987 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.292289 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.340437 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.357067 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fc86467f-hsrzz_3c8bf29a-c28c-44d7-8e02-77a7ec993a7e/oauth-openshift/1.log" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.357611 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication_oauth-openshift-9fc86467f-hsrzz_3c8bf29a-c28c-44d7-8e02-77a7ec993a7e/oauth-openshift/0.log" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.357648 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c8bf29a-c28c-44d7-8e02-77a7ec993a7e" containerID="18936f5255740ad5d556aacfe11401898215be426dc115a4fa0941eeb6006604" exitCode=255 Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.357676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" event={"ID":"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e","Type":"ContainerDied","Data":"18936f5255740ad5d556aacfe11401898215be426dc115a4fa0941eeb6006604"} Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.357710 4832 scope.go:117] "RemoveContainer" containerID="97a314b44c5398b5aa3a6be238f60c75be1ab0db16a05ed571b3ad0436e76069" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.358073 4832 scope.go:117] "RemoveContainer" containerID="18936f5255740ad5d556aacfe11401898215be426dc115a4fa0941eeb6006604" Jan 25 08:01:32 crc kubenswrapper[4832]: E0125 08:01:32.358252 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-9fc86467f-hsrzz_openshift-authentication(3c8bf29a-c28c-44d7-8e02-77a7ec993a7e)\"" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" podUID="3c8bf29a-c28c-44d7-8e02-77a7ec993a7e" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.605321 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 25 08:01:32 crc kubenswrapper[4832]: I0125 08:01:32.631680 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 08:01:32 crc 
kubenswrapper[4832]: I0125 08:01:32.861902 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.002374 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.002655 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.165621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.165771 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.165834 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.165888 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 08:01:33 
crc kubenswrapper[4832]: I0125 08:01:33.165917 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.166237 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.166277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.166299 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.166317 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.174486 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.266866 4832 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.267161 4832 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.267171 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.267189 4832 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.267200 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.364029 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.364076 4832 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f" exitCode=137 Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.364130 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.364155 4832 scope.go:117] "RemoveContainer" containerID="70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.366843 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fc86467f-hsrzz_3c8bf29a-c28c-44d7-8e02-77a7ec993a7e/oauth-openshift/1.log" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.367180 4832 scope.go:117] "RemoveContainer" containerID="18936f5255740ad5d556aacfe11401898215be426dc115a4fa0941eeb6006604" Jan 25 08:01:33 crc kubenswrapper[4832]: E0125 08:01:33.367484 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-9fc86467f-hsrzz_openshift-authentication(3c8bf29a-c28c-44d7-8e02-77a7ec993a7e)\"" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" podUID="3c8bf29a-c28c-44d7-8e02-77a7ec993a7e" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.386473 4832 scope.go:117] "RemoveContainer" containerID="70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f" Jan 25 08:01:33 crc kubenswrapper[4832]: E0125 08:01:33.387598 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f\": container with ID starting with 70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f not found: ID does not exist" containerID="70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.387639 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f"} err="failed to get container status \"70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f\": rpc error: code = NotFound desc = could not find container \"70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f\": container with ID starting with 70652d96b7264c2b3dc4a0f20d1e20539e185b73b0ec9a36e5d36cb4805d127f not found: ID does not exist" Jan 25 08:01:33 crc kubenswrapper[4832]: I0125 08:01:33.677834 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 25 08:01:34 crc kubenswrapper[4832]: I0125 08:01:34.113875 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 25 08:01:34 crc kubenswrapper[4832]: I0125 08:01:34.138739 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 25 08:01:34 crc kubenswrapper[4832]: I0125 08:01:34.776013 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 25 08:01:39 crc kubenswrapper[4832]: I0125 08:01:39.876470 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:39 crc kubenswrapper[4832]: I0125 
08:01:39.876898 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:39 crc kubenswrapper[4832]: I0125 08:01:39.877866 4832 scope.go:117] "RemoveContainer" containerID="18936f5255740ad5d556aacfe11401898215be426dc115a4fa0941eeb6006604" Jan 25 08:01:39 crc kubenswrapper[4832]: E0125 08:01:39.878163 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-9fc86467f-hsrzz_openshift-authentication(3c8bf29a-c28c-44d7-8e02-77a7ec993a7e)\"" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" podUID="3c8bf29a-c28c-44d7-8e02-77a7ec993a7e" Jan 25 08:01:54 crc kubenswrapper[4832]: I0125 08:01:54.669924 4832 scope.go:117] "RemoveContainer" containerID="18936f5255740ad5d556aacfe11401898215be426dc115a4fa0941eeb6006604" Jan 25 08:01:55 crc kubenswrapper[4832]: I0125 08:01:55.538692 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fc86467f-hsrzz_3c8bf29a-c28c-44d7-8e02-77a7ec993a7e/oauth-openshift/1.log" Jan 25 08:01:55 crc kubenswrapper[4832]: I0125 08:01:55.539365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" event={"ID":"3c8bf29a-c28c-44d7-8e02-77a7ec993a7e","Type":"ContainerStarted","Data":"4660a80c5857fc1238f44829911ad8b3a7217a1767dcb68619e6c08001d512ae"} Jan 25 08:01:55 crc kubenswrapper[4832]: I0125 08:01:55.539823 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:55 crc kubenswrapper[4832]: I0125 08:01:55.550257 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" Jan 25 08:01:55 crc kubenswrapper[4832]: I0125 
08:01:55.576840 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9fc86467f-hsrzz" podStartSLOduration=75.576820671 podStartE2EDuration="1m15.576820671s" podCreationTimestamp="2026-01-25 08:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:01:55.573060289 +0000 UTC m=+298.246883852" watchObservedRunningTime="2026-01-25 08:01:55.576820671 +0000 UTC m=+298.250644204" Jan 25 08:01:57 crc kubenswrapper[4832]: I0125 08:01:57.476775 4832 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 25 08:02:16 crc kubenswrapper[4832]: I0125 08:02:16.568802 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sqbmg"] Jan 25 08:02:16 crc kubenswrapper[4832]: I0125 08:02:16.569731 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" podUID="8be00535-0bc6-41a2-a79c-552be0f574a8" containerName="controller-manager" containerID="cri-o://9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f" gracePeriod=30 Jan 25 08:02:16 crc kubenswrapper[4832]: I0125 08:02:16.673113 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw"] Jan 25 08:02:16 crc kubenswrapper[4832]: I0125 08:02:16.673547 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" podUID="7fad5166-9aa0-4c10-8c73-2186af1d226d" containerName="route-controller-manager" containerID="cri-o://42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf" gracePeriod=30 Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.015032 4832 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.084885 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.128711 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-proxy-ca-bundles\") pod \"8be00535-0bc6-41a2-a79c-552be0f574a8\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.128771 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-client-ca\") pod \"8be00535-0bc6-41a2-a79c-552be0f574a8\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.128805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-config\") pod \"8be00535-0bc6-41a2-a79c-552be0f574a8\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.128842 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk4vl\" (UniqueName: \"kubernetes.io/projected/8be00535-0bc6-41a2-a79c-552be0f574a8-kube-api-access-xk4vl\") pod \"8be00535-0bc6-41a2-a79c-552be0f574a8\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.128916 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8be00535-0bc6-41a2-a79c-552be0f574a8-serving-cert\") pod \"8be00535-0bc6-41a2-a79c-552be0f574a8\" (UID: \"8be00535-0bc6-41a2-a79c-552be0f574a8\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.130806 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8be00535-0bc6-41a2-a79c-552be0f574a8" (UID: "8be00535-0bc6-41a2-a79c-552be0f574a8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.130861 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-config" (OuterVolumeSpecName: "config") pod "8be00535-0bc6-41a2-a79c-552be0f574a8" (UID: "8be00535-0bc6-41a2-a79c-552be0f574a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.132022 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-client-ca" (OuterVolumeSpecName: "client-ca") pod "8be00535-0bc6-41a2-a79c-552be0f574a8" (UID: "8be00535-0bc6-41a2-a79c-552be0f574a8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.136151 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be00535-0bc6-41a2-a79c-552be0f574a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8be00535-0bc6-41a2-a79c-552be0f574a8" (UID: "8be00535-0bc6-41a2-a79c-552be0f574a8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.136181 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be00535-0bc6-41a2-a79c-552be0f574a8-kube-api-access-xk4vl" (OuterVolumeSpecName: "kube-api-access-xk4vl") pod "8be00535-0bc6-41a2-a79c-552be0f574a8" (UID: "8be00535-0bc6-41a2-a79c-552be0f574a8"). InnerVolumeSpecName "kube-api-access-xk4vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-config\") pod \"7fad5166-9aa0-4c10-8c73-2186af1d226d\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230197 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shcjj\" (UniqueName: \"kubernetes.io/projected/7fad5166-9aa0-4c10-8c73-2186af1d226d-kube-api-access-shcjj\") pod \"7fad5166-9aa0-4c10-8c73-2186af1d226d\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230261 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-client-ca\") pod \"7fad5166-9aa0-4c10-8c73-2186af1d226d\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fad5166-9aa0-4c10-8c73-2186af1d226d-serving-cert\") pod \"7fad5166-9aa0-4c10-8c73-2186af1d226d\" (UID: \"7fad5166-9aa0-4c10-8c73-2186af1d226d\") " Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230603 4832 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be00535-0bc6-41a2-a79c-552be0f574a8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230618 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230630 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230645 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be00535-0bc6-41a2-a79c-552be0f574a8-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.230655 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk4vl\" (UniqueName: \"kubernetes.io/projected/8be00535-0bc6-41a2-a79c-552be0f574a8-kube-api-access-xk4vl\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.231507 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-client-ca" (OuterVolumeSpecName: "client-ca") pod "7fad5166-9aa0-4c10-8c73-2186af1d226d" (UID: "7fad5166-9aa0-4c10-8c73-2186af1d226d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.231535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-config" (OuterVolumeSpecName: "config") pod "7fad5166-9aa0-4c10-8c73-2186af1d226d" (UID: "7fad5166-9aa0-4c10-8c73-2186af1d226d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.234098 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fad5166-9aa0-4c10-8c73-2186af1d226d-kube-api-access-shcjj" (OuterVolumeSpecName: "kube-api-access-shcjj") pod "7fad5166-9aa0-4c10-8c73-2186af1d226d" (UID: "7fad5166-9aa0-4c10-8c73-2186af1d226d"). InnerVolumeSpecName "kube-api-access-shcjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.234468 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fad5166-9aa0-4c10-8c73-2186af1d226d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7fad5166-9aa0-4c10-8c73-2186af1d226d" (UID: "7fad5166-9aa0-4c10-8c73-2186af1d226d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.331858 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.331897 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shcjj\" (UniqueName: \"kubernetes.io/projected/7fad5166-9aa0-4c10-8c73-2186af1d226d-kube-api-access-shcjj\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.331910 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fad5166-9aa0-4c10-8c73-2186af1d226d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.331920 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fad5166-9aa0-4c10-8c73-2186af1d226d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.683248 4832 generic.go:334] "Generic (PLEG): container finished" podID="8be00535-0bc6-41a2-a79c-552be0f574a8" containerID="9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f" exitCode=0 Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.683510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" event={"ID":"8be00535-0bc6-41a2-a79c-552be0f574a8","Type":"ContainerDied","Data":"9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f"} Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.683596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" 
event={"ID":"8be00535-0bc6-41a2-a79c-552be0f574a8","Type":"ContainerDied","Data":"540bf08f9a452ad64ac7c34ee7785738e4574473c85b256f4b4b816be7d14e87"} Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.683648 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sqbmg" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.683648 4832 scope.go:117] "RemoveContainer" containerID="9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.685726 4832 generic.go:334] "Generic (PLEG): container finished" podID="7fad5166-9aa0-4c10-8c73-2186af1d226d" containerID="42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf" exitCode=0 Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.685822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" event={"ID":"7fad5166-9aa0-4c10-8c73-2186af1d226d","Type":"ContainerDied","Data":"42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf"} Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.685887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" event={"ID":"7fad5166-9aa0-4c10-8c73-2186af1d226d","Type":"ContainerDied","Data":"9b7781d0df06fa0acac3945c3db98d23cdcb581ebfc8e1b6c83e46dc05d5432e"} Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.688154 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.730455 4832 scope.go:117] "RemoveContainer" containerID="9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f" Jan 25 08:02:17 crc kubenswrapper[4832]: E0125 08:02:17.731087 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f\": container with ID starting with 9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f not found: ID does not exist" containerID="9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.731196 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f"} err="failed to get container status \"9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f\": rpc error: code = NotFound desc = could not find container \"9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f\": container with ID starting with 9000c5cb2305bfd03ddd15ab32c5d7c5de5d0fa5cebf5d45d85557ac0e62a18f not found: ID does not exist" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.731232 4832 scope.go:117] "RemoveContainer" containerID="42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.744874 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw"] Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.750030 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-csbzw"] Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.761377 4832 
scope.go:117] "RemoveContainer" containerID="42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.761569 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sqbmg"] Jan 25 08:02:17 crc kubenswrapper[4832]: E0125 08:02:17.762141 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf\": container with ID starting with 42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf not found: ID does not exist" containerID="42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.762222 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf"} err="failed to get container status \"42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf\": rpc error: code = NotFound desc = could not find container \"42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf\": container with ID starting with 42dd21d4e8703a89f775e8ff69d13fc6b03894f2734a8752f71a2f070db1bcaf not found: ID does not exist" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.766936 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sqbmg"] Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.951623 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d4fc77944-xmrzw"] Jan 25 08:02:17 crc kubenswrapper[4832]: E0125 08:02:17.952633 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be00535-0bc6-41a2-a79c-552be0f574a8" containerName="controller-manager" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 
08:02:17.952656 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be00535-0bc6-41a2-a79c-552be0f574a8" containerName="controller-manager" Jan 25 08:02:17 crc kubenswrapper[4832]: E0125 08:02:17.952679 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fad5166-9aa0-4c10-8c73-2186af1d226d" containerName="route-controller-manager" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.952690 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fad5166-9aa0-4c10-8c73-2186af1d226d" containerName="route-controller-manager" Jan 25 08:02:17 crc kubenswrapper[4832]: E0125 08:02:17.952719 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.952727 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.952933 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be00535-0bc6-41a2-a79c-552be0f574a8" containerName="controller-manager" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.952954 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fad5166-9aa0-4c10-8c73-2186af1d226d" containerName="route-controller-manager" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.952969 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.953912 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.957186 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv"] Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.957689 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.958346 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.958349 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.958657 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.958732 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.960477 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d4fc77944-xmrzw"] Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.960990 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.961010 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.963872 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv"] Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.964725 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.968672 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.968924 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.969298 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.970573 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.970826 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 08:02:17 crc kubenswrapper[4832]: I0125 08:02:17.971107 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-config\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150084 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-serving-cert\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrc46\" (UniqueName: \"kubernetes.io/projected/bfdb78ba-6a90-4f59-8a21-31e5de03016e-kube-api-access-jrc46\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150135 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtdfc\" (UniqueName: \"kubernetes.io/projected/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-kube-api-access-wtdfc\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-config\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150238 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-client-ca\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: 
\"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150265 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb78ba-6a90-4f59-8a21-31e5de03016e-serving-cert\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-proxy-ca-bundles\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.150336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-client-ca\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.251829 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-config\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.251874 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-client-ca\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.251902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb78ba-6a90-4f59-8a21-31e5de03016e-serving-cert\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.251927 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-proxy-ca-bundles\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.251946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-client-ca\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.252006 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-serving-cert\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 
08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.252021 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-config\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.252045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrc46\" (UniqueName: \"kubernetes.io/projected/bfdb78ba-6a90-4f59-8a21-31e5de03016e-kube-api-access-jrc46\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.252080 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtdfc\" (UniqueName: \"kubernetes.io/projected/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-kube-api-access-wtdfc\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.253088 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-client-ca\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.253284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-config\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: 
\"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.253603 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-proxy-ca-bundles\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.253696 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-client-ca\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.253747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-config\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.259215 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb78ba-6a90-4f59-8a21-31e5de03016e-serving-cert\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.260021 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-serving-cert\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.271134 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtdfc\" (UniqueName: \"kubernetes.io/projected/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-kube-api-access-wtdfc\") pod \"controller-manager-d4fc77944-xmrzw\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.274270 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrc46\" (UniqueName: \"kubernetes.io/projected/bfdb78ba-6a90-4f59-8a21-31e5de03016e-kube-api-access-jrc46\") pod \"route-controller-manager-6d7c88cf6b-xkjkv\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.277161 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.288623 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.518374 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv"] Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.691068 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d4fc77944-xmrzw"] Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.695907 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" event={"ID":"bfdb78ba-6a90-4f59-8a21-31e5de03016e","Type":"ContainerStarted","Data":"bf2d81d20a39a91370e1eede62256bc04c1f9ce704c615df8d65dcbdfbb3f1e7"} Jan 25 08:02:18 crc kubenswrapper[4832]: I0125 08:02:18.695953 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" event={"ID":"bfdb78ba-6a90-4f59-8a21-31e5de03016e","Type":"ContainerStarted","Data":"985dc59bc0876aca872ee1a8fe9b887cab78d5dec2838695896e2e26d4c3b001"} Jan 25 08:02:18 crc kubenswrapper[4832]: W0125 08:02:18.704483 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb105b4ed_f7f6_43d8_a0ef_84c44e8116a7.slice/crio-45f3eeeb8e38c8c679d51823c84d2c9ee01eeadb3286dc8c02aa339752d23e5b WatchSource:0}: Error finding container 45f3eeeb8e38c8c679d51823c84d2c9ee01eeadb3286dc8c02aa339752d23e5b: Status 404 returned error can't find the container with id 45f3eeeb8e38c8c679d51823c84d2c9ee01eeadb3286dc8c02aa339752d23e5b Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.677925 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fad5166-9aa0-4c10-8c73-2186af1d226d" path="/var/lib/kubelet/pods/7fad5166-9aa0-4c10-8c73-2186af1d226d/volumes" Jan 
25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.679005 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be00535-0bc6-41a2-a79c-552be0f574a8" path="/var/lib/kubelet/pods/8be00535-0bc6-41a2-a79c-552be0f574a8/volumes" Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.703155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" event={"ID":"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7","Type":"ContainerStarted","Data":"06332a650895196e0a2fad0b09c6ea9135564e7af9e244797df4b93cc9e29c3e"} Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.703220 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" event={"ID":"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7","Type":"ContainerStarted","Data":"45f3eeeb8e38c8c679d51823c84d2c9ee01eeadb3286dc8c02aa339752d23e5b"} Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.703528 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.703955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.708631 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.709573 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.722782 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" 
podStartSLOduration=3.722765402 podStartE2EDuration="3.722765402s" podCreationTimestamp="2026-01-25 08:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:02:19.72088065 +0000 UTC m=+322.394704183" watchObservedRunningTime="2026-01-25 08:02:19.722765402 +0000 UTC m=+322.396588935" Jan 25 08:02:19 crc kubenswrapper[4832]: I0125 08:02:19.738510 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" podStartSLOduration=3.738487402 podStartE2EDuration="3.738487402s" podCreationTimestamp="2026-01-25 08:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:02:19.734913336 +0000 UTC m=+322.408736879" watchObservedRunningTime="2026-01-25 08:02:19.738487402 +0000 UTC m=+322.412310945" Jan 25 08:02:22 crc kubenswrapper[4832]: I0125 08:02:22.150239 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:02:22 crc kubenswrapper[4832]: I0125 08:02:22.150635 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:02:52 crc kubenswrapper[4832]: I0125 08:02:52.149912 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:02:52 crc kubenswrapper[4832]: I0125 08:02:52.150588 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:02:56 crc kubenswrapper[4832]: I0125 08:02:56.565625 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d4fc77944-xmrzw"] Jan 25 08:02:56 crc kubenswrapper[4832]: I0125 08:02:56.566105 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" podUID="b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" containerName="controller-manager" containerID="cri-o://06332a650895196e0a2fad0b09c6ea9135564e7af9e244797df4b93cc9e29c3e" gracePeriod=30 Jan 25 08:02:56 crc kubenswrapper[4832]: I0125 08:02:56.911219 4832 generic.go:334] "Generic (PLEG): container finished" podID="b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" containerID="06332a650895196e0a2fad0b09c6ea9135564e7af9e244797df4b93cc9e29c3e" exitCode=0 Jan 25 08:02:56 crc kubenswrapper[4832]: I0125 08:02:56.911290 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" event={"ID":"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7","Type":"ContainerDied","Data":"06332a650895196e0a2fad0b09c6ea9135564e7af9e244797df4b93cc9e29c3e"} Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.495083 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.629264 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtdfc\" (UniqueName: \"kubernetes.io/projected/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-kube-api-access-wtdfc\") pod \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.629314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-proxy-ca-bundles\") pod \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.629344 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-config\") pod \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.629425 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-serving-cert\") pod \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.629456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-client-ca\") pod \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\" (UID: \"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7\") " Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.629951 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" (UID: "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.630097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" (UID: "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.630412 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-config" (OuterVolumeSpecName: "config") pod "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" (UID: "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.643651 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-kube-api-access-wtdfc" (OuterVolumeSpecName: "kube-api-access-wtdfc") pod "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" (UID: "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7"). InnerVolumeSpecName "kube-api-access-wtdfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.654875 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" (UID: "b105b4ed-f7f6-43d8-a0ef-84c44e8116a7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.730574 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtdfc\" (UniqueName: \"kubernetes.io/projected/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-kube-api-access-wtdfc\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.730610 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.730621 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.730634 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.730647 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.918451 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" event={"ID":"b105b4ed-f7f6-43d8-a0ef-84c44e8116a7","Type":"ContainerDied","Data":"45f3eeeb8e38c8c679d51823c84d2c9ee01eeadb3286dc8c02aa339752d23e5b"} Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.918589 4832 scope.go:117] "RemoveContainer" containerID="06332a650895196e0a2fad0b09c6ea9135564e7af9e244797df4b93cc9e29c3e" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.918492 4832 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4fc77944-xmrzw" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.940720 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d4fc77944-xmrzw"] Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.946095 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d4fc77944-xmrzw"] Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.979544 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-567cf8c87f-mjh4s"] Jan 25 08:02:57 crc kubenswrapper[4832]: E0125 08:02:57.979880 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" containerName="controller-manager" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.979908 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" containerName="controller-manager" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.980132 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" containerName="controller-manager" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.980650 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.983000 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.983061 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.983214 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.983318 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.984212 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.984754 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.994160 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 08:02:57 crc kubenswrapper[4832]: I0125 08:02:57.998243 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567cf8c87f-mjh4s"] Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.137403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-proxy-ca-bundles\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " 
pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.137564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-client-ca\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.137617 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-config\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.137643 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-serving-cert\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.137671 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc84d\" (UniqueName: \"kubernetes.io/projected/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-kube-api-access-pc84d\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.238863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-client-ca\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.238923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-config\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.238950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-serving-cert\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.238972 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc84d\" (UniqueName: \"kubernetes.io/projected/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-kube-api-access-pc84d\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.239017 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-proxy-ca-bundles\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.239771 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-client-ca\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.240871 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-proxy-ca-bundles\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.241006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-config\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.257119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-serving-cert\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.266537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc84d\" (UniqueName: \"kubernetes.io/projected/796f4e65-e440-46f2-b9c3-2a5a9f93cfba-kube-api-access-pc84d\") pod \"controller-manager-567cf8c87f-mjh4s\" (UID: \"796f4e65-e440-46f2-b9c3-2a5a9f93cfba\") " pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 
08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.305081 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.468881 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-567cf8c87f-mjh4s"] Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.925099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" event={"ID":"796f4e65-e440-46f2-b9c3-2a5a9f93cfba","Type":"ContainerStarted","Data":"a9a93f7b4fbbc3a339006d01c908b06719a1f86e6ddde15d2969352db987a889"} Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.925438 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.925451 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" event={"ID":"796f4e65-e440-46f2-b9c3-2a5a9f93cfba","Type":"ContainerStarted","Data":"b836b2e5cd2fe533e527a02af8b69049f8f85b2739b6bc2d7143eb8d3c06a74c"} Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.931630 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" Jan 25 08:02:58 crc kubenswrapper[4832]: I0125 08:02:58.942227 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-567cf8c87f-mjh4s" podStartSLOduration=2.942210373 podStartE2EDuration="2.942210373s" podCreationTimestamp="2026-01-25 08:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:02:58.942163361 +0000 UTC m=+361.615986894" 
watchObservedRunningTime="2026-01-25 08:02:58.942210373 +0000 UTC m=+361.616033906" Jan 25 08:02:59 crc kubenswrapper[4832]: I0125 08:02:59.676558 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b105b4ed-f7f6-43d8-a0ef-84c44e8116a7" path="/var/lib/kubelet/pods/b105b4ed-f7f6-43d8-a0ef-84c44e8116a7/volumes" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.621811 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv"] Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.623166 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" podUID="bfdb78ba-6a90-4f59-8a21-31e5de03016e" containerName="route-controller-manager" containerID="cri-o://bf2d81d20a39a91370e1eede62256bc04c1f9ce704c615df8d65dcbdfbb3f1e7" gracePeriod=30 Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.783036 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mz8gw"] Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.783843 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.800664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mz8gw"] Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.923542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-registry-tls\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.923585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clgzc\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-kube-api-access-clgzc\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.923620 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/732b763f-ae7b-4623-a27b-3c23812409ba-registry-certificates\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.923637 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/732b763f-ae7b-4623-a27b-3c23812409ba-trusted-ca\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.923833 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/732b763f-ae7b-4623-a27b-3c23812409ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.924020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/732b763f-ae7b-4623-a27b-3c23812409ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.924307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.924479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-bound-sa-token\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:16 crc kubenswrapper[4832]: I0125 08:03:16.960218 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.025835 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/732b763f-ae7b-4623-a27b-3c23812409ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.025902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/732b763f-ae7b-4623-a27b-3c23812409ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.025965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-bound-sa-token\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.025991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-registry-tls\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc 
kubenswrapper[4832]: I0125 08:03:17.026016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clgzc\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-kube-api-access-clgzc\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.026053 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/732b763f-ae7b-4623-a27b-3c23812409ba-registry-certificates\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.026078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/732b763f-ae7b-4623-a27b-3c23812409ba-trusted-ca\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.028620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/732b763f-ae7b-4623-a27b-3c23812409ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.029902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/732b763f-ae7b-4623-a27b-3c23812409ba-trusted-ca\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.030083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/732b763f-ae7b-4623-a27b-3c23812409ba-registry-certificates\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.042788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/732b763f-ae7b-4623-a27b-3c23812409ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.044811 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-bound-sa-token\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.044903 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-registry-tls\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.047505 4832 generic.go:334] "Generic (PLEG): container finished" podID="bfdb78ba-6a90-4f59-8a21-31e5de03016e" containerID="bf2d81d20a39a91370e1eede62256bc04c1f9ce704c615df8d65dcbdfbb3f1e7" exitCode=0 Jan 25 08:03:17 crc kubenswrapper[4832]: 
I0125 08:03:17.047571 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" event={"ID":"bfdb78ba-6a90-4f59-8a21-31e5de03016e","Type":"ContainerDied","Data":"bf2d81d20a39a91370e1eede62256bc04c1f9ce704c615df8d65dcbdfbb3f1e7"} Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.054307 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clgzc\" (UniqueName: \"kubernetes.io/projected/732b763f-ae7b-4623-a27b-3c23812409ba-kube-api-access-clgzc\") pod \"image-registry-66df7c8f76-mz8gw\" (UID: \"732b763f-ae7b-4623-a27b-3c23812409ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.105447 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.113908 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.231033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrc46\" (UniqueName: \"kubernetes.io/projected/bfdb78ba-6a90-4f59-8a21-31e5de03016e-kube-api-access-jrc46\") pod \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.231156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb78ba-6a90-4f59-8a21-31e5de03016e-serving-cert\") pod \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.231225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-config\") pod \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.231247 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-client-ca\") pod \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\" (UID: \"bfdb78ba-6a90-4f59-8a21-31e5de03016e\") " Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.232661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-client-ca" (OuterVolumeSpecName: "client-ca") pod "bfdb78ba-6a90-4f59-8a21-31e5de03016e" (UID: "bfdb78ba-6a90-4f59-8a21-31e5de03016e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.233324 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-config" (OuterVolumeSpecName: "config") pod "bfdb78ba-6a90-4f59-8a21-31e5de03016e" (UID: "bfdb78ba-6a90-4f59-8a21-31e5de03016e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.236559 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfdb78ba-6a90-4f59-8a21-31e5de03016e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bfdb78ba-6a90-4f59-8a21-31e5de03016e" (UID: "bfdb78ba-6a90-4f59-8a21-31e5de03016e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.236688 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdb78ba-6a90-4f59-8a21-31e5de03016e-kube-api-access-jrc46" (OuterVolumeSpecName: "kube-api-access-jrc46") pod "bfdb78ba-6a90-4f59-8a21-31e5de03016e" (UID: "bfdb78ba-6a90-4f59-8a21-31e5de03016e"). InnerVolumeSpecName "kube-api-access-jrc46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.333880 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfdb78ba-6a90-4f59-8a21-31e5de03016e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.333964 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.333994 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfdb78ba-6a90-4f59-8a21-31e5de03016e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.334020 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrc46\" (UniqueName: \"kubernetes.io/projected/bfdb78ba-6a90-4f59-8a21-31e5de03016e-kube-api-access-jrc46\") on node \"crc\" DevicePath \"\"" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.524551 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mz8gw"] Jan 25 08:03:17 crc kubenswrapper[4832]: W0125 08:03:17.539151 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod732b763f_ae7b_4623_a27b_3c23812409ba.slice/crio-519b2ea9740908ea047afb6d38d6c39fa758ae7d9d91ca337541f184404f2715 WatchSource:0}: Error finding container 519b2ea9740908ea047afb6d38d6c39fa758ae7d9d91ca337541f184404f2715: Status 404 returned error can't find the container with id 519b2ea9740908ea047afb6d38d6c39fa758ae7d9d91ca337541f184404f2715 Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.993617 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"] Jan 25 08:03:17 crc kubenswrapper[4832]: E0125 08:03:17.994884 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdb78ba-6a90-4f59-8a21-31e5de03016e" containerName="route-controller-manager" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.994966 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdb78ba-6a90-4f59-8a21-31e5de03016e" containerName="route-controller-manager" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.995126 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdb78ba-6a90-4f59-8a21-31e5de03016e" containerName="route-controller-manager" Jan 25 08:03:17 crc kubenswrapper[4832]: I0125 08:03:17.995572 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq" Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.010752 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"] Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.054229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" event={"ID":"bfdb78ba-6a90-4f59-8a21-31e5de03016e","Type":"ContainerDied","Data":"985dc59bc0876aca872ee1a8fe9b887cab78d5dec2838695896e2e26d4c3b001"} Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.054452 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv" Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.054620 4832 scope.go:117] "RemoveContainer" containerID="bf2d81d20a39a91370e1eede62256bc04c1f9ce704c615df8d65dcbdfbb3f1e7" Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.057419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" event={"ID":"732b763f-ae7b-4623-a27b-3c23812409ba","Type":"ContainerStarted","Data":"b5479f89245c1ef73031a1276b662c604063e8b7f2df6bf8d9338e77de957ae0"} Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.057530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" event={"ID":"732b763f-ae7b-4623-a27b-3c23812409ba","Type":"ContainerStarted","Data":"519b2ea9740908ea047afb6d38d6c39fa758ae7d9d91ca337541f184404f2715"} Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.057704 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.087265 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" podStartSLOduration=2.087239443 podStartE2EDuration="2.087239443s" podCreationTimestamp="2026-01-25 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:03:18.085881029 +0000 UTC m=+380.759704582" watchObservedRunningTime="2026-01-25 08:03:18.087239443 +0000 UTC m=+380.761062986" Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.107965 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv"] Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.113050 
4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d7c88cf6b-xkjkv"]
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.147803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3b456a-9a2c-45c0-8e2c-233bd1700706-config\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.147886 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm6rz\" (UniqueName: \"kubernetes.io/projected/ef3b456a-9a2c-45c0-8e2c-233bd1700706-kube-api-access-qm6rz\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.147928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef3b456a-9a2c-45c0-8e2c-233bd1700706-serving-cert\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.148032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef3b456a-9a2c-45c0-8e2c-233bd1700706-client-ca\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.249615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef3b456a-9a2c-45c0-8e2c-233bd1700706-client-ca\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.250281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3b456a-9a2c-45c0-8e2c-233bd1700706-config\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.250330 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm6rz\" (UniqueName: \"kubernetes.io/projected/ef3b456a-9a2c-45c0-8e2c-233bd1700706-kube-api-access-qm6rz\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.250409 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef3b456a-9a2c-45c0-8e2c-233bd1700706-serving-cert\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.251701 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef3b456a-9a2c-45c0-8e2c-233bd1700706-client-ca\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.252260 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3b456a-9a2c-45c0-8e2c-233bd1700706-config\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.257612 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef3b456a-9a2c-45c0-8e2c-233bd1700706-serving-cert\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.286618 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm6rz\" (UniqueName: \"kubernetes.io/projected/ef3b456a-9a2c-45c0-8e2c-233bd1700706-kube-api-access-qm6rz\") pod \"route-controller-manager-665689765d-kk2vq\" (UID: \"ef3b456a-9a2c-45c0-8e2c-233bd1700706\") " pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.317545 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:18 crc kubenswrapper[4832]: I0125 08:03:18.736422 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"]
Jan 25 08:03:18 crc kubenswrapper[4832]: W0125 08:03:18.744813 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef3b456a_9a2c_45c0_8e2c_233bd1700706.slice/crio-bb6476c30ca64127b4aa24860be637715f4d0cc242f4ec4f7dcb383be6edb2ed WatchSource:0}: Error finding container bb6476c30ca64127b4aa24860be637715f4d0cc242f4ec4f7dcb383be6edb2ed: Status 404 returned error can't find the container with id bb6476c30ca64127b4aa24860be637715f4d0cc242f4ec4f7dcb383be6edb2ed
Jan 25 08:03:19 crc kubenswrapper[4832]: I0125 08:03:19.064419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq" event={"ID":"ef3b456a-9a2c-45c0-8e2c-233bd1700706","Type":"ContainerStarted","Data":"294200f9c86eb004c01370f7825395e92b9a50b1737ab83b2a2e776b09b021b1"}
Jan 25 08:03:19 crc kubenswrapper[4832]: I0125 08:03:19.064465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq" event={"ID":"ef3b456a-9a2c-45c0-8e2c-233bd1700706","Type":"ContainerStarted","Data":"bb6476c30ca64127b4aa24860be637715f4d0cc242f4ec4f7dcb383be6edb2ed"}
Jan 25 08:03:19 crc kubenswrapper[4832]: I0125 08:03:19.064665 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:19 crc kubenswrapper[4832]: I0125 08:03:19.082827 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq" podStartSLOduration=3.082811081 podStartE2EDuration="3.082811081s" podCreationTimestamp="2026-01-25 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:03:19.082006996 +0000 UTC m=+381.755830529" watchObservedRunningTime="2026-01-25 08:03:19.082811081 +0000 UTC m=+381.756634604"
Jan 25 08:03:19 crc kubenswrapper[4832]: I0125 08:03:19.383418 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-665689765d-kk2vq"
Jan 25 08:03:19 crc kubenswrapper[4832]: I0125 08:03:19.675185 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfdb78ba-6a90-4f59-8a21-31e5de03016e" path="/var/lib/kubelet/pods/bfdb78ba-6a90-4f59-8a21-31e5de03016e/volumes"
Jan 25 08:03:22 crc kubenswrapper[4832]: I0125 08:03:22.150612 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 08:03:22 crc kubenswrapper[4832]: I0125 08:03:22.151347 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 08:03:22 crc kubenswrapper[4832]: I0125 08:03:22.151501 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz"
Jan 25 08:03:22 crc kubenswrapper[4832]: I0125 08:03:22.152577 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab67a00f3383f3ebf817c9eee1dbd1d6d82dc6ce62d279f6c63b25d61faa31bb"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 25 08:03:22 crc kubenswrapper[4832]: I0125 08:03:22.152675 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://ab67a00f3383f3ebf817c9eee1dbd1d6d82dc6ce62d279f6c63b25d61faa31bb" gracePeriod=600
Jan 25 08:03:23 crc kubenswrapper[4832]: I0125 08:03:23.099306 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="ab67a00f3383f3ebf817c9eee1dbd1d6d82dc6ce62d279f6c63b25d61faa31bb" exitCode=0
Jan 25 08:03:23 crc kubenswrapper[4832]: I0125 08:03:23.099446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"ab67a00f3383f3ebf817c9eee1dbd1d6d82dc6ce62d279f6c63b25d61faa31bb"}
Jan 25 08:03:23 crc kubenswrapper[4832]: I0125 08:03:23.099860 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"63d1a0b13b16f0668b1c02ef162797d02564ab151b4d705b380dc4d22fa1cf34"}
Jan 25 08:03:23 crc kubenswrapper[4832]: I0125 08:03:23.099914 4832 scope.go:117] "RemoveContainer" containerID="9c32b6a39b2bc87d55b11a88a54d0909633358c70f3fc555cd4308bc5bf2689a"
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.953604 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ntqw"]
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.954690 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7ntqw" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="registry-server" containerID="cri-o://c80a8496e4fb8daab894185ccd7abe905b3a6f0e511ef2e71a15cdfbad3cc4df" gracePeriod=30
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.977483 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgzxd"]
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.977898 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hgzxd" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="registry-server" containerID="cri-o://3ea0ea2e74d9246447567c3a5eaeb53f46cc61ea93eace6986d87ad0c2ea5e76" gracePeriod=30
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.981709 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqjzs"]
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.982126 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" podUID="c97f51ea-b215-4660-bc7b-2406783aa3bb" containerName="marketplace-operator" containerID="cri-o://c7664c7ac9b4377cc9c7b624c5daefd6b6623febb560cc7ea9d15dcfc36d59e8" gracePeriod=30
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.993058 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmnth"]
Jan 25 08:03:25 crc kubenswrapper[4832]: I0125 08:03:25.994856 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmnth" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="registry-server" containerID="cri-o://3fa7616eebc1718b3b41cc2b08ec70817195522aeb22689dfc06b792f55e8178" gracePeriod=30
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.007758 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f6nwt"]
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.008034 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f6nwt" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="registry-server" containerID="cri-o://0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05" gracePeriod=30
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.025906 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ncr8s"]
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.026570 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.031433 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ncr8s"]
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.119124 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkmgl\" (UniqueName: \"kubernetes.io/projected/12e3f428-4b38-471d-8048-e3d55ce0d4b4-kube-api-access-tkmgl\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.119179 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e3f428-4b38-471d-8048-e3d55ce0d4b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.119201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e3f428-4b38-471d-8048-e3d55ce0d4b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.140108 4832 generic.go:334] "Generic (PLEG): container finished" podID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerID="c80a8496e4fb8daab894185ccd7abe905b3a6f0e511ef2e71a15cdfbad3cc4df" exitCode=0
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.140209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ntqw" event={"ID":"e70962d8-5db3-43c3-84bf-380addc38e9c","Type":"ContainerDied","Data":"c80a8496e4fb8daab894185ccd7abe905b3a6f0e511ef2e71a15cdfbad3cc4df"}
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.147626 4832 generic.go:334] "Generic (PLEG): container finished" podID="c97f51ea-b215-4660-bc7b-2406783aa3bb" containerID="c7664c7ac9b4377cc9c7b624c5daefd6b6623febb560cc7ea9d15dcfc36d59e8" exitCode=0
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.147680 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" event={"ID":"c97f51ea-b215-4660-bc7b-2406783aa3bb","Type":"ContainerDied","Data":"c7664c7ac9b4377cc9c7b624c5daefd6b6623febb560cc7ea9d15dcfc36d59e8"}
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.170078 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerID="3ea0ea2e74d9246447567c3a5eaeb53f46cc61ea93eace6986d87ad0c2ea5e76" exitCode=0
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.170177 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzxd" event={"ID":"9ca2e919-2c33-41e7-baa6-40f5437a2c3c","Type":"ContainerDied","Data":"3ea0ea2e74d9246447567c3a5eaeb53f46cc61ea93eace6986d87ad0c2ea5e76"}
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.183360 4832 generic.go:334] "Generic (PLEG): container finished" podID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerID="3fa7616eebc1718b3b41cc2b08ec70817195522aeb22689dfc06b792f55e8178" exitCode=0
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.183446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmnth" event={"ID":"de82f302-d899-48c7-aedc-4b24f4541b2b","Type":"ContainerDied","Data":"3fa7616eebc1718b3b41cc2b08ec70817195522aeb22689dfc06b792f55e8178"}
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.220078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e3f428-4b38-471d-8048-e3d55ce0d4b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.220166 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkmgl\" (UniqueName: \"kubernetes.io/projected/12e3f428-4b38-471d-8048-e3d55ce0d4b4-kube-api-access-tkmgl\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.220197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e3f428-4b38-471d-8048-e3d55ce0d4b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.221584 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12e3f428-4b38-471d-8048-e3d55ce0d4b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.235227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12e3f428-4b38-471d-8048-e3d55ce0d4b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.239832 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkmgl\" (UniqueName: \"kubernetes.io/projected/12e3f428-4b38-471d-8048-e3d55ce0d4b4-kube-api-access-tkmgl\") pod \"marketplace-operator-79b997595-ncr8s\" (UID: \"12e3f428-4b38-471d-8048-e3d55ce0d4b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.363130 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.429949 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ntqw"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.621902 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgzxd"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.624238 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4xpz\" (UniqueName: \"kubernetes.io/projected/e70962d8-5db3-43c3-84bf-380addc38e9c-kube-api-access-s4xpz\") pod \"e70962d8-5db3-43c3-84bf-380addc38e9c\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.624321 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-utilities\") pod \"e70962d8-5db3-43c3-84bf-380addc38e9c\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.624351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-catalog-content\") pod \"e70962d8-5db3-43c3-84bf-380addc38e9c\" (UID: \"e70962d8-5db3-43c3-84bf-380addc38e9c\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.633143 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-utilities" (OuterVolumeSpecName: "utilities") pod "e70962d8-5db3-43c3-84bf-380addc38e9c" (UID: "e70962d8-5db3-43c3-84bf-380addc38e9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.633633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70962d8-5db3-43c3-84bf-380addc38e9c-kube-api-access-s4xpz" (OuterVolumeSpecName: "kube-api-access-s4xpz") pod "e70962d8-5db3-43c3-84bf-380addc38e9c" (UID: "e70962d8-5db3-43c3-84bf-380addc38e9c"). InnerVolumeSpecName "kube-api-access-s4xpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.678608 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6nwt"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.685863 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.705598 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e70962d8-5db3-43c3-84bf-380addc38e9c" (UID: "e70962d8-5db3-43c3-84bf-380addc38e9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.725324 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-catalog-content\") pod \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.725374 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-utilities\") pod \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.725421 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmnth"
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.725460 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbmfg\" (UniqueName: \"kubernetes.io/projected/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-kube-api-access-gbmfg\") pod \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\" (UID: \"9ca2e919-2c33-41e7-baa6-40f5437a2c3c\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.725770 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.725788 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70962d8-5db3-43c3-84bf-380addc38e9c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.725806 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4xpz\" (UniqueName: \"kubernetes.io/projected/e70962d8-5db3-43c3-84bf-380addc38e9c-kube-api-access-s4xpz\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.726217 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-utilities" (OuterVolumeSpecName: "utilities") pod "9ca2e919-2c33-41e7-baa6-40f5437a2c3c" (UID: "9ca2e919-2c33-41e7-baa6-40f5437a2c3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.730502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-kube-api-access-gbmfg" (OuterVolumeSpecName: "kube-api-access-gbmfg") pod "9ca2e919-2c33-41e7-baa6-40f5437a2c3c" (UID: "9ca2e919-2c33-41e7-baa6-40f5437a2c3c"). InnerVolumeSpecName "kube-api-access-gbmfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.789275 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ca2e919-2c33-41e7-baa6-40f5437a2c3c" (UID: "9ca2e919-2c33-41e7-baa6-40f5437a2c3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827085 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-utilities\") pod \"479892d8-5a53-40ee-9f16-d4480c2c3e03\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-catalog-content\") pod \"de82f302-d899-48c7-aedc-4b24f4541b2b\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827230 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxbkz\" (UniqueName: \"kubernetes.io/projected/de82f302-d899-48c7-aedc-4b24f4541b2b-kube-api-access-wxbkz\") pod \"de82f302-d899-48c7-aedc-4b24f4541b2b\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827293 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-trusted-ca\") pod \"c97f51ea-b215-4660-bc7b-2406783aa3bb\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827319 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-operator-metrics\") pod \"c97f51ea-b215-4660-bc7b-2406783aa3bb\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbldb\" (UniqueName: \"kubernetes.io/projected/479892d8-5a53-40ee-9f16-d4480c2c3e03-kube-api-access-dbldb\") pod \"479892d8-5a53-40ee-9f16-d4480c2c3e03\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9x6j\" (UniqueName: \"kubernetes.io/projected/c97f51ea-b215-4660-bc7b-2406783aa3bb-kube-api-access-m9x6j\") pod \"c97f51ea-b215-4660-bc7b-2406783aa3bb\" (UID: \"c97f51ea-b215-4660-bc7b-2406783aa3bb\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827375 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-utilities\") pod \"de82f302-d899-48c7-aedc-4b24f4541b2b\" (UID: \"de82f302-d899-48c7-aedc-4b24f4541b2b\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.827397 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-catalog-content\") pod \"479892d8-5a53-40ee-9f16-d4480c2c3e03\" (UID: \"479892d8-5a53-40ee-9f16-d4480c2c3e03\") "
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.829083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-utilities" (OuterVolumeSpecName: "utilities") pod "479892d8-5a53-40ee-9f16-d4480c2c3e03" (UID: "479892d8-5a53-40ee-9f16-d4480c2c3e03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.831287 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbmfg\" (UniqueName: \"kubernetes.io/projected/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-kube-api-access-gbmfg\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.831319 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.831352 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca2e919-2c33-41e7-baa6-40f5437a2c3c-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.832641 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c97f51ea-b215-4660-bc7b-2406783aa3bb" (UID: "c97f51ea-b215-4660-bc7b-2406783aa3bb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.834165 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de82f302-d899-48c7-aedc-4b24f4541b2b-kube-api-access-wxbkz" (OuterVolumeSpecName: "kube-api-access-wxbkz") pod "de82f302-d899-48c7-aedc-4b24f4541b2b" (UID: "de82f302-d899-48c7-aedc-4b24f4541b2b"). InnerVolumeSpecName "kube-api-access-wxbkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.834983 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-utilities" (OuterVolumeSpecName: "utilities") pod "de82f302-d899-48c7-aedc-4b24f4541b2b" (UID: "de82f302-d899-48c7-aedc-4b24f4541b2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.836754 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/479892d8-5a53-40ee-9f16-d4480c2c3e03-kube-api-access-dbldb" (OuterVolumeSpecName: "kube-api-access-dbldb") pod "479892d8-5a53-40ee-9f16-d4480c2c3e03" (UID: "479892d8-5a53-40ee-9f16-d4480c2c3e03"). InnerVolumeSpecName "kube-api-access-dbldb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.839360 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97f51ea-b215-4660-bc7b-2406783aa3bb-kube-api-access-m9x6j" (OuterVolumeSpecName: "kube-api-access-m9x6j") pod "c97f51ea-b215-4660-bc7b-2406783aa3bb" (UID: "c97f51ea-b215-4660-bc7b-2406783aa3bb"). InnerVolumeSpecName "kube-api-access-m9x6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.840592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c97f51ea-b215-4660-bc7b-2406783aa3bb" (UID: "c97f51ea-b215-4660-bc7b-2406783aa3bb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.856821 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de82f302-d899-48c7-aedc-4b24f4541b2b" (UID: "de82f302-d899-48c7-aedc-4b24f4541b2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932333 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932379 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c97f51ea-b215-4660-bc7b-2406783aa3bb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932394 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbldb\" (UniqueName: \"kubernetes.io/projected/479892d8-5a53-40ee-9f16-d4480c2c3e03-kube-api-access-dbldb\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932405 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9x6j\" (UniqueName: \"kubernetes.io/projected/c97f51ea-b215-4660-bc7b-2406783aa3bb-kube-api-access-m9x6j\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932420 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932449 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932461 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de82f302-d899-48c7-aedc-4b24f4541b2b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.932472 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxbkz\" (UniqueName: \"kubernetes.io/projected/de82f302-d899-48c7-aedc-4b24f4541b2b-kube-api-access-wxbkz\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.950294 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "479892d8-5a53-40ee-9f16-d4480c2c3e03" (UID: "479892d8-5a53-40ee-9f16-d4480c2c3e03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:03:26 crc kubenswrapper[4832]: I0125 08:03:26.989363 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ncr8s"]
Jan 25 08:03:26 crc kubenswrapper[4832]: W0125 08:03:26.992776 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e3f428_4b38_471d_8048_e3d55ce0d4b4.slice/crio-776534568cdf387e3ff9cdb91e5af587b45cfcb9a55bbe09f0659bd80126d351 WatchSource:0}: Error finding container 776534568cdf387e3ff9cdb91e5af587b45cfcb9a55bbe09f0659bd80126d351: Status 404 returned error can't find the container with id 776534568cdf387e3ff9cdb91e5af587b45cfcb9a55bbe09f0659bd80126d351
Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.034038 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/479892d8-5a53-40ee-9f16-d4480c2c3e03-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.194338 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzxd" event={"ID":"9ca2e919-2c33-41e7-baa6-40f5437a2c3c","Type":"ContainerDied","Data":"b6c719bac066722a1521079a1ebc6dfc92367eaa1f1374b71e48ced4dd4c69cb"}
Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.194408 4832 scope.go:117] "RemoveContainer" containerID="3ea0ea2e74d9246447567c3a5eaeb53f46cc61ea93eace6986d87ad0c2ea5e76"
Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.194383 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgzxd" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.199260 4832 generic.go:334] "Generic (PLEG): container finished" podID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerID="0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05" exitCode=0 Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.199335 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6nwt" event={"ID":"479892d8-5a53-40ee-9f16-d4480c2c3e03","Type":"ContainerDied","Data":"0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05"} Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.199372 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6nwt" event={"ID":"479892d8-5a53-40ee-9f16-d4480c2c3e03","Type":"ContainerDied","Data":"127cc4332ddae9518675191b7ff5d76421650c33e5fd334f43393e427ed6939d"} Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.199496 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6nwt" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.210867 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmnth" event={"ID":"de82f302-d899-48c7-aedc-4b24f4541b2b","Type":"ContainerDied","Data":"431d294c492ed2eb7131c55cbcf8b2b7d3cfeb9b126674d8cf875938e17d1637"} Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.210997 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmnth" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.214541 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ntqw" event={"ID":"e70962d8-5db3-43c3-84bf-380addc38e9c","Type":"ContainerDied","Data":"1c962dbb608a1dee25986c1352c3b194a3342adc2556faad12137e1d2184c600"} Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.214639 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ntqw" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.216796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s" event={"ID":"12e3f428-4b38-471d-8048-e3d55ce0d4b4","Type":"ContainerStarted","Data":"96a50cc65f42150c3ca9eb97bca04b1f951aeeb11f761f59153281a8eb4ffad1"} Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.216835 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s" event={"ID":"12e3f428-4b38-471d-8048-e3d55ce0d4b4","Type":"ContainerStarted","Data":"776534568cdf387e3ff9cdb91e5af587b45cfcb9a55bbe09f0659bd80126d351"} Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.217733 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.218788 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" event={"ID":"c97f51ea-b215-4660-bc7b-2406783aa3bb","Type":"ContainerDied","Data":"09260039b4ef997bc5158f5963a092c064b8417a9c43275caeaa431a633cea7b"} Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.218888 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqjzs" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.219082 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ncr8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body= Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.219135 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s" podUID="12e3f428-4b38-471d-8048-e3d55ce0d4b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.219377 4832 scope.go:117] "RemoveContainer" containerID="bad721fd34d82bc8a914a20e6fade466dc886327ceaf1d22df157e4241f9866d" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.248241 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgzxd"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.258161 4832 scope.go:117] "RemoveContainer" containerID="a9740819c55ba65dac41e257c64271a6fffa2f105bd173d52ba77be1e1a91b2f" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.260848 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hgzxd"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.267155 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f6nwt"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.272742 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f6nwt"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.285333 4832 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s" podStartSLOduration=2.285300342 podStartE2EDuration="2.285300342s" podCreationTimestamp="2026-01-25 08:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:03:27.279488517 +0000 UTC m=+389.953312050" watchObservedRunningTime="2026-01-25 08:03:27.285300342 +0000 UTC m=+389.959123885" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.287727 4832 scope.go:117] "RemoveContainer" containerID="0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.295606 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ntqw"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.299411 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7ntqw"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.310143 4832 scope.go:117] "RemoveContainer" containerID="ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.326031 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqjzs"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.333537 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqjzs"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.337006 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmnth"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.342647 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmnth"] Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.350219 4832 scope.go:117] "RemoveContainer" 
containerID="e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.364384 4832 scope.go:117] "RemoveContainer" containerID="0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.365027 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05\": container with ID starting with 0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05 not found: ID does not exist" containerID="0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.365081 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05"} err="failed to get container status \"0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05\": rpc error: code = NotFound desc = could not find container \"0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05\": container with ID starting with 0d0d908fac00bd4c28962788fc5e0650358742d5bb3525e96fd059be8ee3db05 not found: ID does not exist" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.365121 4832 scope.go:117] "RemoveContainer" containerID="ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.365766 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d\": container with ID starting with ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d not found: ID does not exist" containerID="ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d" Jan 25 08:03:27 crc 
kubenswrapper[4832]: I0125 08:03:27.365824 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d"} err="failed to get container status \"ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d\": rpc error: code = NotFound desc = could not find container \"ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d\": container with ID starting with ec3422846c4f7ca5a3e9d03efa6c1a6e5cf108f14cf005b6d25c2c56e461f21d not found: ID does not exist" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.365871 4832 scope.go:117] "RemoveContainer" containerID="e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.366282 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766\": container with ID starting with e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766 not found: ID does not exist" containerID="e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.366315 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766"} err="failed to get container status \"e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766\": rpc error: code = NotFound desc = could not find container \"e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766\": container with ID starting with e0b7fe92ad2aa5af33f56e083dd111fbc1388c3d3d952adfc8bd0213a65b7766 not found: ID does not exist" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.366338 4832 scope.go:117] "RemoveContainer" containerID="3fa7616eebc1718b3b41cc2b08ec70817195522aeb22689dfc06b792f55e8178" Jan 25 
08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.388370 4832 scope.go:117] "RemoveContainer" containerID="9704f0e7139e3714217680a9d4fe3a70ba17d6f8e5f513fbc3d16cf51b1ba25a" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.415893 4832 scope.go:117] "RemoveContainer" containerID="bbc3775b6b6494c05ef373c63a534637c6029db1d75be738e8d862cbca808950" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.436105 4832 scope.go:117] "RemoveContainer" containerID="c80a8496e4fb8daab894185ccd7abe905b3a6f0e511ef2e71a15cdfbad3cc4df" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.460748 4832 scope.go:117] "RemoveContainer" containerID="b14cb83643fc32267fb0eab12b9d0935caf7c094e1451e3835b0d7b781d4da46" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.480266 4832 scope.go:117] "RemoveContainer" containerID="54eca1bc87adc3d2b05494c017fdad90e29819a526374686473f122d4dffd0c8" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.493841 4832 scope.go:117] "RemoveContainer" containerID="c7664c7ac9b4377cc9c7b624c5daefd6b6623febb560cc7ea9d15dcfc36d59e8" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.677117 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" path="/var/lib/kubelet/pods/479892d8-5a53-40ee-9f16-d4480c2c3e03/volumes" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.677986 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" path="/var/lib/kubelet/pods/9ca2e919-2c33-41e7-baa6-40f5437a2c3c/volumes" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.678668 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c97f51ea-b215-4660-bc7b-2406783aa3bb" path="/var/lib/kubelet/pods/c97f51ea-b215-4660-bc7b-2406783aa3bb/volumes" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.679118 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" 
path="/var/lib/kubelet/pods/de82f302-d899-48c7-aedc-4b24f4541b2b/volumes" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.679697 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" path="/var/lib/kubelet/pods/e70962d8-5db3-43c3-84bf-380addc38e9c/volumes" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967477 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-228pm"] Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967711 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967725 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967742 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967750 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967762 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967770 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967783 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967791 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967799 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967806 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967818 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967826 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967837 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f51ea-b215-4660-bc7b-2406783aa3bb" containerName="marketplace-operator" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967844 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f51ea-b215-4660-bc7b-2406783aa3bb" containerName="marketplace-operator" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967854 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967861 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967869 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967878 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967887 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967895 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967905 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967912 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="extract-utilities" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967925 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967931 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: E0125 08:03:27.967939 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.967946 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="extract-content" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.968048 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="de82f302-d899-48c7-aedc-4b24f4541b2b" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.968068 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e70962d8-5db3-43c3-84bf-380addc38e9c" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.968079 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f51ea-b215-4660-bc7b-2406783aa3bb" containerName="marketplace-operator" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.968088 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca2e919-2c33-41e7-baa6-40f5437a2c3c" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.968097 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="479892d8-5a53-40ee-9f16-d4480c2c3e03" containerName="registry-server" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.969065 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.973712 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 08:03:27 crc kubenswrapper[4832]: I0125 08:03:27.978127 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-228pm"] Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.147280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c017036-4f0f-41d7-86b8-52d5216b44ba-catalog-content\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.147393 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kcnq\" (UniqueName: \"kubernetes.io/projected/5c017036-4f0f-41d7-86b8-52d5216b44ba-kube-api-access-8kcnq\") pod \"redhat-marketplace-228pm\" 
(UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.147556 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c017036-4f0f-41d7-86b8-52d5216b44ba-utilities\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.232555 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ncr8s" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.249002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kcnq\" (UniqueName: \"kubernetes.io/projected/5c017036-4f0f-41d7-86b8-52d5216b44ba-kube-api-access-8kcnq\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.249311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c017036-4f0f-41d7-86b8-52d5216b44ba-utilities\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.249381 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c017036-4f0f-41d7-86b8-52d5216b44ba-catalog-content\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.250076 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c017036-4f0f-41d7-86b8-52d5216b44ba-utilities\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.254621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c017036-4f0f-41d7-86b8-52d5216b44ba-catalog-content\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.272342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kcnq\" (UniqueName: \"kubernetes.io/projected/5c017036-4f0f-41d7-86b8-52d5216b44ba-kube-api-access-8kcnq\") pod \"redhat-marketplace-228pm\" (UID: \"5c017036-4f0f-41d7-86b8-52d5216b44ba\") " pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.308445 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.564592 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fnkc8"] Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.567469 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.570924 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.599988 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fnkc8"] Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.684651 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-228pm"] Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.759202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8676ecdd-5a18-4dfb-aa09-0c398279d340-catalog-content\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.759260 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2t5g\" (UniqueName: \"kubernetes.io/projected/8676ecdd-5a18-4dfb-aa09-0c398279d340-kube-api-access-n2t5g\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.759341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8676ecdd-5a18-4dfb-aa09-0c398279d340-utilities\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.860707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8676ecdd-5a18-4dfb-aa09-0c398279d340-catalog-content\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.861049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2t5g\" (UniqueName: \"kubernetes.io/projected/8676ecdd-5a18-4dfb-aa09-0c398279d340-kube-api-access-n2t5g\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.861102 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8676ecdd-5a18-4dfb-aa09-0c398279d340-utilities\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.861332 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8676ecdd-5a18-4dfb-aa09-0c398279d340-catalog-content\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.861578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8676ecdd-5a18-4dfb-aa09-0c398279d340-utilities\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.881239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2t5g\" (UniqueName: 
\"kubernetes.io/projected/8676ecdd-5a18-4dfb-aa09-0c398279d340-kube-api-access-n2t5g\") pod \"redhat-operators-fnkc8\" (UID: \"8676ecdd-5a18-4dfb-aa09-0c398279d340\") " pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:28 crc kubenswrapper[4832]: I0125 08:03:28.916542 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:29 crc kubenswrapper[4832]: I0125 08:03:29.236401 4832 generic.go:334] "Generic (PLEG): container finished" podID="5c017036-4f0f-41d7-86b8-52d5216b44ba" containerID="02219affadf7f146608948eb7d293d53a7e9a4d7eed4dfcb92eeba78ee32d61b" exitCode=0 Jan 25 08:03:29 crc kubenswrapper[4832]: I0125 08:03:29.236494 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-228pm" event={"ID":"5c017036-4f0f-41d7-86b8-52d5216b44ba","Type":"ContainerDied","Data":"02219affadf7f146608948eb7d293d53a7e9a4d7eed4dfcb92eeba78ee32d61b"} Jan 25 08:03:29 crc kubenswrapper[4832]: I0125 08:03:29.236534 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-228pm" event={"ID":"5c017036-4f0f-41d7-86b8-52d5216b44ba","Type":"ContainerStarted","Data":"fe5c942b94723a3cd891e5943a301eb77d8232cefe70cebad2a304f2c028d986"} Jan 25 08:03:29 crc kubenswrapper[4832]: I0125 08:03:29.282853 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fnkc8"] Jan 25 08:03:29 crc kubenswrapper[4832]: W0125 08:03:29.290301 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8676ecdd_5a18_4dfb_aa09_0c398279d340.slice/crio-21e8ba9b06096506097c1f6d406497d8b7b95dcee05159db227de0ff1283f475 WatchSource:0}: Error finding container 21e8ba9b06096506097c1f6d406497d8b7b95dcee05159db227de0ff1283f475: Status 404 returned error can't find the container with id 
21e8ba9b06096506097c1f6d406497d8b7b95dcee05159db227de0ff1283f475 Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.243624 4832 generic.go:334] "Generic (PLEG): container finished" podID="8676ecdd-5a18-4dfb-aa09-0c398279d340" containerID="513483c2584006b6010aa1eed620ed51007ef7e01c4dc3987f4d71ed808d2e04" exitCode=0 Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.243684 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnkc8" event={"ID":"8676ecdd-5a18-4dfb-aa09-0c398279d340","Type":"ContainerDied","Data":"513483c2584006b6010aa1eed620ed51007ef7e01c4dc3987f4d71ed808d2e04"} Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.244030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnkc8" event={"ID":"8676ecdd-5a18-4dfb-aa09-0c398279d340","Type":"ContainerStarted","Data":"21e8ba9b06096506097c1f6d406497d8b7b95dcee05159db227de0ff1283f475"} Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.247160 4832 generic.go:334] "Generic (PLEG): container finished" podID="5c017036-4f0f-41d7-86b8-52d5216b44ba" containerID="9eb0c916a4e57ddd6aa78baceeaf8a889b53ff9aa06181e33feeba988febdbdc" exitCode=0 Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.247651 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-228pm" event={"ID":"5c017036-4f0f-41d7-86b8-52d5216b44ba","Type":"ContainerDied","Data":"9eb0c916a4e57ddd6aa78baceeaf8a889b53ff9aa06181e33feeba988febdbdc"} Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.369494 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8dnnk"] Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.370911 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.373302 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.384865 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dnnk"] Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.483849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prw2m\" (UniqueName: \"kubernetes.io/projected/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-kube-api-access-prw2m\") pod \"certified-operators-8dnnk\" (UID: \"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.484015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-catalog-content\") pod \"certified-operators-8dnnk\" (UID: \"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.484055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-utilities\") pod \"certified-operators-8dnnk\" (UID: \"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.585175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-catalog-content\") pod \"certified-operators-8dnnk\" (UID: 
\"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.585225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-utilities\") pod \"certified-operators-8dnnk\" (UID: \"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.585252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prw2m\" (UniqueName: \"kubernetes.io/projected/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-kube-api-access-prw2m\") pod \"certified-operators-8dnnk\" (UID: \"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.585654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-utilities\") pod \"certified-operators-8dnnk\" (UID: \"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.589342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-catalog-content\") pod \"certified-operators-8dnnk\" (UID: \"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.609130 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prw2m\" (UniqueName: \"kubernetes.io/projected/ab8542fb-edc3-4aac-9c78-41ec2ff8981f-kube-api-access-prw2m\") pod \"certified-operators-8dnnk\" (UID: 
\"ab8542fb-edc3-4aac-9c78-41ec2ff8981f\") " pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.691053 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.968450 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cjfdq"] Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.970785 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.972226 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 08:03:30 crc kubenswrapper[4832]: I0125 08:03:30.983597 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjfdq"] Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.091956 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4371fdc-00c0-4e6a-a877-b17501271922-catalog-content\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.092055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4371fdc-00c0-4e6a-a877-b17501271922-utilities\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.092176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-f9k89\" (UniqueName: \"kubernetes.io/projected/b4371fdc-00c0-4e6a-a877-b17501271922-kube-api-access-f9k89\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.107667 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dnnk"] Jan 25 08:03:31 crc kubenswrapper[4832]: W0125 08:03:31.112052 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab8542fb_edc3_4aac_9c78_41ec2ff8981f.slice/crio-6da6af8ec7ba014eba12a0b5e9b756145e66dde82c030269606c1fbc3a3f46d7 WatchSource:0}: Error finding container 6da6af8ec7ba014eba12a0b5e9b756145e66dde82c030269606c1fbc3a3f46d7: Status 404 returned error can't find the container with id 6da6af8ec7ba014eba12a0b5e9b756145e66dde82c030269606c1fbc3a3f46d7 Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.193509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9k89\" (UniqueName: \"kubernetes.io/projected/b4371fdc-00c0-4e6a-a877-b17501271922-kube-api-access-f9k89\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.193589 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4371fdc-00c0-4e6a-a877-b17501271922-catalog-content\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.193631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b4371fdc-00c0-4e6a-a877-b17501271922-utilities\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.194120 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4371fdc-00c0-4e6a-a877-b17501271922-utilities\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.194228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4371fdc-00c0-4e6a-a877-b17501271922-catalog-content\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.214198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9k89\" (UniqueName: \"kubernetes.io/projected/b4371fdc-00c0-4e6a-a877-b17501271922-kube-api-access-f9k89\") pod \"community-operators-cjfdq\" (UID: \"b4371fdc-00c0-4e6a-a877-b17501271922\") " pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.257828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnkc8" event={"ID":"8676ecdd-5a18-4dfb-aa09-0c398279d340","Type":"ContainerStarted","Data":"ee40844ec72aac3267e79aad85bbb6d7bca8b1f6dfe2a46bfdee8b97c26d096a"} Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.262905 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-228pm" 
event={"ID":"5c017036-4f0f-41d7-86b8-52d5216b44ba","Type":"ContainerStarted","Data":"2d2095ef6114a1de9ad3659f3dbc7b0adfab1ca1500c0de47e1c650ebaa1da3d"} Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.264357 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dnnk" event={"ID":"ab8542fb-edc3-4aac-9c78-41ec2ff8981f","Type":"ContainerStarted","Data":"70a3ef0c98f5718e1814b48d5921fdfd42048748eed0025dcf027348b5721671"} Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.264426 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dnnk" event={"ID":"ab8542fb-edc3-4aac-9c78-41ec2ff8981f","Type":"ContainerStarted","Data":"6da6af8ec7ba014eba12a0b5e9b756145e66dde82c030269606c1fbc3a3f46d7"} Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.285937 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.300354 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-228pm" podStartSLOduration=2.843577795 podStartE2EDuration="4.300339533s" podCreationTimestamp="2026-01-25 08:03:27 +0000 UTC" firstStartedPulling="2026-01-25 08:03:29.238114818 +0000 UTC m=+391.911938351" lastFinishedPulling="2026-01-25 08:03:30.694876556 +0000 UTC m=+393.368700089" observedRunningTime="2026-01-25 08:03:31.298707611 +0000 UTC m=+393.972531144" watchObservedRunningTime="2026-01-25 08:03:31.300339533 +0000 UTC m=+393.974163066" Jan 25 08:03:31 crc kubenswrapper[4832]: I0125 08:03:31.719254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjfdq"] Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.272662 4832 generic.go:334] "Generic (PLEG): container finished" podID="8676ecdd-5a18-4dfb-aa09-0c398279d340" 
containerID="ee40844ec72aac3267e79aad85bbb6d7bca8b1f6dfe2a46bfdee8b97c26d096a" exitCode=0 Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.272749 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnkc8" event={"ID":"8676ecdd-5a18-4dfb-aa09-0c398279d340","Type":"ContainerDied","Data":"ee40844ec72aac3267e79aad85bbb6d7bca8b1f6dfe2a46bfdee8b97c26d096a"} Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.276014 4832 generic.go:334] "Generic (PLEG): container finished" podID="b4371fdc-00c0-4e6a-a877-b17501271922" containerID="86c4a2bde26ed7de7052c1b0754aad9df25f86f9d3a98ad1e334ea38acca2ea9" exitCode=0 Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.276114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjfdq" event={"ID":"b4371fdc-00c0-4e6a-a877-b17501271922","Type":"ContainerDied","Data":"86c4a2bde26ed7de7052c1b0754aad9df25f86f9d3a98ad1e334ea38acca2ea9"} Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.276162 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjfdq" event={"ID":"b4371fdc-00c0-4e6a-a877-b17501271922","Type":"ContainerStarted","Data":"676785f7d16810fb16bce8ba0d3ab0343ff4baf4d37a710dbdc6468db6327536"} Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.280469 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab8542fb-edc3-4aac-9c78-41ec2ff8981f" containerID="70a3ef0c98f5718e1814b48d5921fdfd42048748eed0025dcf027348b5721671" exitCode=0 Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.280561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dnnk" event={"ID":"ab8542fb-edc3-4aac-9c78-41ec2ff8981f","Type":"ContainerDied","Data":"70a3ef0c98f5718e1814b48d5921fdfd42048748eed0025dcf027348b5721671"} Jan 25 08:03:32 crc kubenswrapper[4832]: I0125 08:03:32.280600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-8dnnk" event={"ID":"ab8542fb-edc3-4aac-9c78-41ec2ff8981f","Type":"ContainerStarted","Data":"d1af42c7ad8f03857e43bdb9b722c5f66455622886455bc014b7b4b50d3bb808"} Jan 25 08:03:33 crc kubenswrapper[4832]: I0125 08:03:33.288518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnkc8" event={"ID":"8676ecdd-5a18-4dfb-aa09-0c398279d340","Type":"ContainerStarted","Data":"6c77360806eccc05767ed460202ed54070268dbe4098b785bcf802de5f2a7a2b"} Jan 25 08:03:33 crc kubenswrapper[4832]: I0125 08:03:33.290039 4832 generic.go:334] "Generic (PLEG): container finished" podID="b4371fdc-00c0-4e6a-a877-b17501271922" containerID="0788b79babfac6285735e43372ce99404d6c80924b53b074697bcc6a8a53583e" exitCode=0 Jan 25 08:03:33 crc kubenswrapper[4832]: I0125 08:03:33.290093 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjfdq" event={"ID":"b4371fdc-00c0-4e6a-a877-b17501271922","Type":"ContainerDied","Data":"0788b79babfac6285735e43372ce99404d6c80924b53b074697bcc6a8a53583e"} Jan 25 08:03:33 crc kubenswrapper[4832]: I0125 08:03:33.292176 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab8542fb-edc3-4aac-9c78-41ec2ff8981f" containerID="d1af42c7ad8f03857e43bdb9b722c5f66455622886455bc014b7b4b50d3bb808" exitCode=0 Jan 25 08:03:33 crc kubenswrapper[4832]: I0125 08:03:33.292785 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dnnk" event={"ID":"ab8542fb-edc3-4aac-9c78-41ec2ff8981f","Type":"ContainerDied","Data":"d1af42c7ad8f03857e43bdb9b722c5f66455622886455bc014b7b4b50d3bb808"} Jan 25 08:03:33 crc kubenswrapper[4832]: I0125 08:03:33.314035 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fnkc8" podStartSLOduration=2.8134050569999998 podStartE2EDuration="5.314013643s" podCreationTimestamp="2026-01-25 08:03:28 +0000 UTC" 
firstStartedPulling="2026-01-25 08:03:30.246716034 +0000 UTC m=+392.920539567" lastFinishedPulling="2026-01-25 08:03:32.74732462 +0000 UTC m=+395.421148153" observedRunningTime="2026-01-25 08:03:33.310527202 +0000 UTC m=+395.984350735" watchObservedRunningTime="2026-01-25 08:03:33.314013643 +0000 UTC m=+395.987837166" Jan 25 08:03:34 crc kubenswrapper[4832]: I0125 08:03:34.300272 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjfdq" event={"ID":"b4371fdc-00c0-4e6a-a877-b17501271922","Type":"ContainerStarted","Data":"657e9409bca439b7cc0f55a332e9f6ebc704d294d04ab09c44aa58c7676d11be"} Jan 25 08:03:34 crc kubenswrapper[4832]: I0125 08:03:34.302855 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dnnk" event={"ID":"ab8542fb-edc3-4aac-9c78-41ec2ff8981f","Type":"ContainerStarted","Data":"fb0cb096acd065c35877e177a61397474980d942cb73825c78833b688a44b626"} Jan 25 08:03:34 crc kubenswrapper[4832]: I0125 08:03:34.323865 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cjfdq" podStartSLOduration=2.9120784730000002 podStartE2EDuration="4.323846739s" podCreationTimestamp="2026-01-25 08:03:30 +0000 UTC" firstStartedPulling="2026-01-25 08:03:32.277811239 +0000 UTC m=+394.951634772" lastFinishedPulling="2026-01-25 08:03:33.689579505 +0000 UTC m=+396.363403038" observedRunningTime="2026-01-25 08:03:34.322133624 +0000 UTC m=+396.995957157" watchObservedRunningTime="2026-01-25 08:03:34.323846739 +0000 UTC m=+396.997670272" Jan 25 08:03:34 crc kubenswrapper[4832]: I0125 08:03:34.339199 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8dnnk" podStartSLOduration=1.852897987 podStartE2EDuration="4.339176407s" podCreationTimestamp="2026-01-25 08:03:30 +0000 UTC" firstStartedPulling="2026-01-25 08:03:31.266213557 +0000 UTC m=+393.940037090" 
lastFinishedPulling="2026-01-25 08:03:33.752491977 +0000 UTC m=+396.426315510" observedRunningTime="2026-01-25 08:03:34.337317387 +0000 UTC m=+397.011140920" watchObservedRunningTime="2026-01-25 08:03:34.339176407 +0000 UTC m=+397.013000150" Jan 25 08:03:37 crc kubenswrapper[4832]: I0125 08:03:37.111340 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mz8gw" Jan 25 08:03:37 crc kubenswrapper[4832]: I0125 08:03:37.184181 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xw4z9"] Jan 25 08:03:38 crc kubenswrapper[4832]: I0125 08:03:38.309986 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:38 crc kubenswrapper[4832]: I0125 08:03:38.310316 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:38 crc kubenswrapper[4832]: I0125 08:03:38.355523 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:38 crc kubenswrapper[4832]: I0125 08:03:38.396067 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-228pm" Jan 25 08:03:38 crc kubenswrapper[4832]: I0125 08:03:38.917658 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:38 crc kubenswrapper[4832]: I0125 08:03:38.918483 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:38 crc kubenswrapper[4832]: I0125 08:03:38.957290 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:39 crc kubenswrapper[4832]: I0125 
08:03:39.372348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fnkc8" Jan 25 08:03:40 crc kubenswrapper[4832]: I0125 08:03:40.691949 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:40 crc kubenswrapper[4832]: I0125 08:03:40.692944 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:40 crc kubenswrapper[4832]: I0125 08:03:40.730828 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:41 crc kubenswrapper[4832]: I0125 08:03:41.287326 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:41 crc kubenswrapper[4832]: I0125 08:03:41.287643 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:41 crc kubenswrapper[4832]: I0125 08:03:41.321607 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:03:41 crc kubenswrapper[4832]: I0125 08:03:41.379255 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8dnnk" Jan 25 08:03:41 crc kubenswrapper[4832]: I0125 08:03:41.389099 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cjfdq" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.228227 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" podUID="267d2772-42e1-4031-bc5f-ac78559a7f82" containerName="registry" 
containerID="cri-o://2e4a259f45e25f040e748dd03bdc843d58af9dfb6b764398371bccceeb62895b" gracePeriod=30 Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.463102 4832 generic.go:334] "Generic (PLEG): container finished" podID="267d2772-42e1-4031-bc5f-ac78559a7f82" containerID="2e4a259f45e25f040e748dd03bdc843d58af9dfb6b764398371bccceeb62895b" exitCode=0 Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.463174 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" event={"ID":"267d2772-42e1-4031-bc5f-ac78559a7f82","Type":"ContainerDied","Data":"2e4a259f45e25f040e748dd03bdc843d58af9dfb6b764398371bccceeb62895b"} Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.608615 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.742721 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/267d2772-42e1-4031-bc5f-ac78559a7f82-installation-pull-secrets\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.742778 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-bound-sa-token\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.742808 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-tls\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc 
kubenswrapper[4832]: I0125 08:04:02.742864 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-certificates\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.742951 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/267d2772-42e1-4031-bc5f-ac78559a7f82-ca-trust-extracted\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.742987 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7lq6\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-kube-api-access-l7lq6\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.743156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.743189 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-trusted-ca\") pod \"267d2772-42e1-4031-bc5f-ac78559a7f82\" (UID: \"267d2772-42e1-4031-bc5f-ac78559a7f82\") " Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.744621 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.744674 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.749317 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/267d2772-42e1-4031-bc5f-ac78559a7f82-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.749555 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-kube-api-access-l7lq6" (OuterVolumeSpecName: "kube-api-access-l7lq6") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "kube-api-access-l7lq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.751886 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.754371 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.755008 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.761220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/267d2772-42e1-4031-bc5f-ac78559a7f82-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "267d2772-42e1-4031-bc5f-ac78559a7f82" (UID: "267d2772-42e1-4031-bc5f-ac78559a7f82"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.844289 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.844340 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/267d2772-42e1-4031-bc5f-ac78559a7f82-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.844353 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7lq6\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-kube-api-access-l7lq6\") on node \"crc\" DevicePath \"\"" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.844365 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/267d2772-42e1-4031-bc5f-ac78559a7f82-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.844378 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/267d2772-42e1-4031-bc5f-ac78559a7f82-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.844414 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 08:04:02 crc kubenswrapper[4832]: I0125 08:04:02.844426 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/267d2772-42e1-4031-bc5f-ac78559a7f82-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 25 08:04:03 crc 
kubenswrapper[4832]: I0125 08:04:03.477980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" event={"ID":"267d2772-42e1-4031-bc5f-ac78559a7f82","Type":"ContainerDied","Data":"c12e4fcdfe62748c8378c2d864a15c0e20bcb1ff3331dd8ec72ab9e1e242d267"} Jan 25 08:04:03 crc kubenswrapper[4832]: I0125 08:04:03.478045 4832 scope.go:117] "RemoveContainer" containerID="2e4a259f45e25f040e748dd03bdc843d58af9dfb6b764398371bccceeb62895b" Jan 25 08:04:03 crc kubenswrapper[4832]: I0125 08:04:03.480695 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xw4z9" Jan 25 08:04:03 crc kubenswrapper[4832]: I0125 08:04:03.515345 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xw4z9"] Jan 25 08:04:03 crc kubenswrapper[4832]: I0125 08:04:03.518915 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xw4z9"] Jan 25 08:04:03 crc kubenswrapper[4832]: I0125 08:04:03.680665 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267d2772-42e1-4031-bc5f-ac78559a7f82" path="/var/lib/kubelet/pods/267d2772-42e1-4031-bc5f-ac78559a7f82/volumes" Jan 25 08:05:22 crc kubenswrapper[4832]: I0125 08:05:22.149956 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:05:22 crc kubenswrapper[4832]: I0125 08:05:22.150587 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:05:52 crc kubenswrapper[4832]: I0125 08:05:52.150180 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:05:52 crc kubenswrapper[4832]: I0125 08:05:52.150928 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.149857 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.150584 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.150651 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.151593 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"63d1a0b13b16f0668b1c02ef162797d02564ab151b4d705b380dc4d22fa1cf34"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.151706 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://63d1a0b13b16f0668b1c02ef162797d02564ab151b4d705b380dc4d22fa1cf34" gracePeriod=600 Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.339326 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="63d1a0b13b16f0668b1c02ef162797d02564ab151b4d705b380dc4d22fa1cf34" exitCode=0 Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.339480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"63d1a0b13b16f0668b1c02ef162797d02564ab151b4d705b380dc4d22fa1cf34"} Jan 25 08:06:22 crc kubenswrapper[4832]: I0125 08:06:22.339997 4832 scope.go:117] "RemoveContainer" containerID="ab67a00f3383f3ebf817c9eee1dbd1d6d82dc6ce62d279f6c63b25d61faa31bb" Jan 25 08:06:23 crc kubenswrapper[4832]: I0125 08:06:23.347917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"2e5cad5f69dc7b0bf2005b84dd78b370ac52759a8ef11d5ebaebb12ca134de5d"} Jan 25 08:08:22 crc kubenswrapper[4832]: I0125 08:08:22.149651 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:08:22 crc kubenswrapper[4832]: I0125 08:08:22.150118 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.481558 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp"] Jan 25 08:08:31 crc kubenswrapper[4832]: E0125 08:08:31.482361 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267d2772-42e1-4031-bc5f-ac78559a7f82" containerName="registry" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.482379 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="267d2772-42e1-4031-bc5f-ac78559a7f82" containerName="registry" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.482516 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="267d2772-42e1-4031-bc5f-ac78559a7f82" containerName="registry" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.483125 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.485484 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.485845 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sw665" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.486013 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.491565 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-n5qlr"] Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.492348 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-n5qlr" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.493651 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zxvh6" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.505679 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-n5qlr"] Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.511757 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5kx64"] Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.512437 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.513819 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wzgh9" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.521594 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5kx64"] Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.549994 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp"] Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.579610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b6zw\" (UniqueName: \"kubernetes.io/projected/3f1a7c21-638b-4421-b695-12d246c8909c-kube-api-access-5b6zw\") pod \"cert-manager-858654f9db-n5qlr\" (UID: \"3f1a7c21-638b-4421-b695-12d246c8909c\") " pod="cert-manager/cert-manager-858654f9db-n5qlr" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.579675 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjkk2\" (UniqueName: \"kubernetes.io/projected/93467136-4fbc-430d-88c8-44d921001d30-kube-api-access-zjkk2\") pod \"cert-manager-cainjector-cf98fcc89-m4mtp\" (UID: \"93467136-4fbc-430d-88c8-44d921001d30\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.579731 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rft6m\" (UniqueName: \"kubernetes.io/projected/b8b3bc3a-3311-4381-98b3-546a392b9967-kube-api-access-rft6m\") pod \"cert-manager-webhook-687f57d79b-5kx64\" (UID: \"b8b3bc3a-3311-4381-98b3-546a392b9967\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.680518 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rft6m\" (UniqueName: \"kubernetes.io/projected/b8b3bc3a-3311-4381-98b3-546a392b9967-kube-api-access-rft6m\") pod \"cert-manager-webhook-687f57d79b-5kx64\" (UID: \"b8b3bc3a-3311-4381-98b3-546a392b9967\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.680573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b6zw\" (UniqueName: \"kubernetes.io/projected/3f1a7c21-638b-4421-b695-12d246c8909c-kube-api-access-5b6zw\") pod \"cert-manager-858654f9db-n5qlr\" (UID: \"3f1a7c21-638b-4421-b695-12d246c8909c\") " pod="cert-manager/cert-manager-858654f9db-n5qlr" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.680614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjkk2\" (UniqueName: \"kubernetes.io/projected/93467136-4fbc-430d-88c8-44d921001d30-kube-api-access-zjkk2\") pod \"cert-manager-cainjector-cf98fcc89-m4mtp\" (UID: \"93467136-4fbc-430d-88c8-44d921001d30\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.699564 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b6zw\" (UniqueName: \"kubernetes.io/projected/3f1a7c21-638b-4421-b695-12d246c8909c-kube-api-access-5b6zw\") pod \"cert-manager-858654f9db-n5qlr\" (UID: \"3f1a7c21-638b-4421-b695-12d246c8909c\") " pod="cert-manager/cert-manager-858654f9db-n5qlr" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.699628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjkk2\" (UniqueName: \"kubernetes.io/projected/93467136-4fbc-430d-88c8-44d921001d30-kube-api-access-zjkk2\") pod \"cert-manager-cainjector-cf98fcc89-m4mtp\" (UID: \"93467136-4fbc-430d-88c8-44d921001d30\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.700425 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rft6m\" (UniqueName: \"kubernetes.io/projected/b8b3bc3a-3311-4381-98b3-546a392b9967-kube-api-access-rft6m\") pod \"cert-manager-webhook-687f57d79b-5kx64\" (UID: \"b8b3bc3a-3311-4381-98b3-546a392b9967\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.805366 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.819500 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-n5qlr" Jan 25 08:08:31 crc kubenswrapper[4832]: I0125 08:08:31.831862 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" Jan 25 08:08:32 crc kubenswrapper[4832]: I0125 08:08:32.096756 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5kx64"] Jan 25 08:08:32 crc kubenswrapper[4832]: I0125 08:08:32.104076 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:08:32 crc kubenswrapper[4832]: I0125 08:08:32.154557 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" event={"ID":"b8b3bc3a-3311-4381-98b3-546a392b9967","Type":"ContainerStarted","Data":"d9515c24bd476cdc55deaffa940ebadc689d7c659fc83f1abdbbc7e199caa917"} Jan 25 08:08:32 crc kubenswrapper[4832]: W0125 08:08:32.272711 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93467136_4fbc_430d_88c8_44d921001d30.slice/crio-39111012422ac0c95110e064bde9dd1a550dfb3471593d00e26efe5cb6f28cf1 WatchSource:0}: Error finding container 39111012422ac0c95110e064bde9dd1a550dfb3471593d00e26efe5cb6f28cf1: Status 404 returned error can't find the container with id 39111012422ac0c95110e064bde9dd1a550dfb3471593d00e26efe5cb6f28cf1 Jan 25 08:08:32 crc kubenswrapper[4832]: W0125 08:08:32.273079 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f1a7c21_638b_4421_b695_12d246c8909c.slice/crio-902bccd606395690095a1db627d8571bf1ba210486db427016848a5bdb72aa44 WatchSource:0}: Error finding container 902bccd606395690095a1db627d8571bf1ba210486db427016848a5bdb72aa44: Status 404 returned error can't find the container with id 902bccd606395690095a1db627d8571bf1ba210486db427016848a5bdb72aa44 Jan 25 08:08:32 crc kubenswrapper[4832]: I0125 08:08:32.273211 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp"] Jan 25 08:08:32 crc kubenswrapper[4832]: I0125 08:08:32.275944 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-n5qlr"] Jan 25 08:08:33 crc kubenswrapper[4832]: I0125 08:08:33.161965 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-n5qlr" event={"ID":"3f1a7c21-638b-4421-b695-12d246c8909c","Type":"ContainerStarted","Data":"902bccd606395690095a1db627d8571bf1ba210486db427016848a5bdb72aa44"} Jan 25 08:08:33 crc kubenswrapper[4832]: I0125 08:08:33.165264 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" event={"ID":"93467136-4fbc-430d-88c8-44d921001d30","Type":"ContainerStarted","Data":"39111012422ac0c95110e064bde9dd1a550dfb3471593d00e26efe5cb6f28cf1"} Jan 25 08:08:37 crc kubenswrapper[4832]: I0125 
08:08:37.202286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-n5qlr" event={"ID":"3f1a7c21-638b-4421-b695-12d246c8909c","Type":"ContainerStarted","Data":"cc41087533a14ab6f3204e6a8e89a9888b472dc9372390a57fbfe91cd89927b2"} Jan 25 08:08:37 crc kubenswrapper[4832]: I0125 08:08:37.205771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" event={"ID":"93467136-4fbc-430d-88c8-44d921001d30","Type":"ContainerStarted","Data":"e5c43b812a5f4aab239fd5fdfc956d7229ac6b3da7eec964229e2579db9afa1d"} Jan 25 08:08:37 crc kubenswrapper[4832]: I0125 08:08:37.208400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" event={"ID":"b8b3bc3a-3311-4381-98b3-546a392b9967","Type":"ContainerStarted","Data":"fc5019d783c716fbda9e99a51d835f03772b341703bdb1501341d92891dc855b"} Jan 25 08:08:37 crc kubenswrapper[4832]: I0125 08:08:37.208799 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" Jan 25 08:08:37 crc kubenswrapper[4832]: I0125 08:08:37.226430 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-n5qlr" podStartSLOduration=1.973558285 podStartE2EDuration="6.22638101s" podCreationTimestamp="2026-01-25 08:08:31 +0000 UTC" firstStartedPulling="2026-01-25 08:08:32.275281807 +0000 UTC m=+694.949105340" lastFinishedPulling="2026-01-25 08:08:36.528104532 +0000 UTC m=+699.201928065" observedRunningTime="2026-01-25 08:08:37.218454931 +0000 UTC m=+699.892278464" watchObservedRunningTime="2026-01-25 08:08:37.22638101 +0000 UTC m=+699.900204573" Jan 25 08:08:37 crc kubenswrapper[4832]: I0125 08:08:37.238626 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m4mtp" podStartSLOduration=1.984774184 
podStartE2EDuration="6.23860643s" podCreationTimestamp="2026-01-25 08:08:31 +0000 UTC" firstStartedPulling="2026-01-25 08:08:32.274525834 +0000 UTC m=+694.948349367" lastFinishedPulling="2026-01-25 08:08:36.52835809 +0000 UTC m=+699.202181613" observedRunningTime="2026-01-25 08:08:37.233526887 +0000 UTC m=+699.907350450" watchObservedRunningTime="2026-01-25 08:08:37.23860643 +0000 UTC m=+699.912429963" Jan 25 08:08:37 crc kubenswrapper[4832]: I0125 08:08:37.259394 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" podStartSLOduration=1.774554659 podStartE2EDuration="6.259361127s" podCreationTimestamp="2026-01-25 08:08:31 +0000 UTC" firstStartedPulling="2026-01-25 08:08:32.103879305 +0000 UTC m=+694.777702838" lastFinishedPulling="2026-01-25 08:08:36.588685773 +0000 UTC m=+699.262509306" observedRunningTime="2026-01-25 08:08:37.255951955 +0000 UTC m=+699.929775528" watchObservedRunningTime="2026-01-25 08:08:37.259361127 +0000 UTC m=+699.933184650" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.444962 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-plv66"] Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.446252 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-controller" containerID="cri-o://e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.446307 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="sbdb" containerID="cri-o://5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.446502 
4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-node" containerID="cri-o://4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.446659 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.446614 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="northd" containerID="cri-o://4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.446581 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-acl-logging" containerID="cri-o://9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.446997 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="nbdb" containerID="cri-o://955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.477965 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" 
podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" containerID="cri-o://d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" gracePeriod=30 Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.721129 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/3.log" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.725988 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovn-acl-logging/0.log" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.726646 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovn-controller/0.log" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.727258 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783342 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8snq7"] Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783581 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783597 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783605 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="northd" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783611 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" 
containerName="northd" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783620 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783628 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783636 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="sbdb" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783644 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="sbdb" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783652 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kubecfg-setup" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783658 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kubecfg-setup" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783665 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783673 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783681 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-node" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783689 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-node" Jan 
25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783699 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="nbdb" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783708 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="nbdb" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783720 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-acl-logging" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783726 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-acl-logging" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783735 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783742 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.783750 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-ovn-metrics" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783757 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-ovn-metrics" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783841 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-node" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783852 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" 
containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783860 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="sbdb" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783869 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="northd" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783878 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="kube-rbac-proxy-ovn-metrics" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783885 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="nbdb" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783893 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-acl-logging" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783901 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovn-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783912 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783920 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.783927 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.784012 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.784019 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: E0125 08:08:41.784029 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.784035 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.784117 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerName="ovnkube-controller" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.785696 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.806946 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-kubelet\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807000 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-systemd-units\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807028 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-netd\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807072 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-script-lib\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807117 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-node-log\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807126 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807139 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkm2k\" (UniqueName: \"kubernetes.io/projected/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-kube-api-access-rkm2k\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807153 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807167 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-ovn\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807200 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-openvswitch\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807236 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-etc-openvswitch\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807259 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-bin\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807275 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-config\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-ovn-kubernetes\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovn-node-metrics-cert\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807331 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-netns\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807359 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-var-lib-openvswitch\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-slash\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807407 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-systemd\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc 
kubenswrapper[4832]: I0125 08:08:41.807445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-env-overrides\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807464 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-log-socket\") pod \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\" (UID: \"9c6fdc72-86dc-433d-8aac-57b0eeefaca3\") " Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-kubelet\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807634 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovnkube-script-lib\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807656 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-log-socket\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807671 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807690 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-cni-bin\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-node-log\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807726 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-cni-netd\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 
08:08:41.807742 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovnkube-config\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovn-node-metrics-cert\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807780 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-ovn\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-run-ovn-kubernetes\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-systemd-units\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 
crc kubenswrapper[4832]: I0125 08:08:41.807840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-var-lib-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-systemd\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807875 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-env-overrides\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807902 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-run-netns\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkrnv\" (UniqueName: \"kubernetes.io/projected/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-kube-api-access-dkrnv\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-node-log" (OuterVolumeSpecName: "node-log") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807935 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-slash\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807195 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807952 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-etc-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807989 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808000 4832 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-node-log\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808009 4832 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808018 4832 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808035 4832 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807260 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.807988 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808015 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808035 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808122 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808140 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-slash" (OuterVolumeSpecName: "host-slash") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.808562 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.809061 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.809711 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.809751 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-log-socket" (OuterVolumeSpecName: "log-socket") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.809849 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.822943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-kube-api-access-rkm2k" (OuterVolumeSpecName: "kube-api-access-rkm2k") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "kube-api-access-rkm2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.822991 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.831030 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9c6fdc72-86dc-433d-8aac-57b0eeefaca3" (UID: "9c6fdc72-86dc-433d-8aac-57b0eeefaca3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.835722 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5kx64" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909194 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-env-overrides\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909279 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-run-netns\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909304 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dkrnv\" (UniqueName: \"kubernetes.io/projected/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-kube-api-access-dkrnv\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-slash\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909360 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-etc-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909400 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-kubelet\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909430 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovnkube-script-lib\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-log-socket\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909513 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-cni-bin\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909570 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-node-log\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-cni-netd\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovnkube-config\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovn-node-metrics-cert\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-ovn\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909690 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-run-ovn-kubernetes\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-systemd-units\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-var-lib-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-systemd\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909827 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909841 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909856 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909872 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909914 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909931 4832 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909943 4832 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-host-slash\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909954 4832 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.909968 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910007 4832 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-log-socket\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910021 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 25 
08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910034 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkm2k\" (UniqueName: \"kubernetes.io/projected/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-kube-api-access-rkm2k\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910048 4832 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910081 4832 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910096 4832 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c6fdc72-86dc-433d-8aac-57b0eeefaca3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-systemd\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.910924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-env-overrides\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911466 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-run-netns\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-cni-bin\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911793 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-ovn\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-kubelet\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-systemd-units\") pod 
\"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911816 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-run-ovn-kubernetes\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-var-lib-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.911893 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-cni-netd\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.912754 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-node-log\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.912801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-run-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.912828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-host-slash\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.912917 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-etc-openvswitch\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.912947 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-log-socket\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.913355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovnkube-config\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.913484 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovnkube-script-lib\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.914931 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-ovn-node-metrics-cert\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:41 crc kubenswrapper[4832]: I0125 08:08:41.934224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkrnv\" (UniqueName: \"kubernetes.io/projected/c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b-kube-api-access-dkrnv\") pod \"ovnkube-node-8snq7\" (UID: \"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b\") " pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.102749 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.237256 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovnkube-controller/3.log" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.241618 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovn-acl-logging/0.log" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242305 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-plv66_9c6fdc72-86dc-433d-8aac-57b0eeefaca3/ovn-controller/0.log" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242836 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" exitCode=0 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242867 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" exitCode=0 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242880 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" exitCode=0 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242892 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" exitCode=0 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242906 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" exitCode=0 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242919 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" exitCode=0 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242934 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" exitCode=143 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242949 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" containerID="e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" exitCode=143 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.242934 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} Jan 25 
08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243111 4832 scope.go:117] "RemoveContainer" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243288 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243427 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243456 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243122 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243480 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243645 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243668 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243682 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243695 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243708 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243720 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243738 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243764 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243798 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243824 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243842 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243861 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243876 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243893 4832 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243909 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243926 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243940 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243952 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.243990 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244004 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} Jan 25 
08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244015 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244027 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244038 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244050 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244062 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244074 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244086 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244097 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} Jan 25 
08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-plv66" event={"ID":"9c6fdc72-86dc-433d-8aac-57b0eeefaca3","Type":"ContainerDied","Data":"d73c9049e88f0abcfe403e59157661b88c6def931705eca09ebe7047427a19f5"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244136 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244152 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244165 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244178 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244191 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244203 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244215 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244227 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244239 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.244251 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.250068 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/2.log" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.251567 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/1.log" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.251829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerDied","Data":"ed577a9d1a5da395208b09f520d83f7012e027930420e43192c4061c5e804650"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.251906 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.253590 4832 generic.go:334] "Generic (PLEG): container finished" podID="5439ad80-35f6-4da4-8745-8104e9963472" 
containerID="ed577a9d1a5da395208b09f520d83f7012e027930420e43192c4061c5e804650" exitCode=2 Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.254480 4832 scope.go:117] "RemoveContainer" containerID="ed577a9d1a5da395208b09f520d83f7012e027930420e43192c4061c5e804650" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.255264 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kzrcf_openshift-multus(5439ad80-35f6-4da4-8745-8104e9963472)\"" pod="openshift-multus/multus-kzrcf" podUID="5439ad80-35f6-4da4-8745-8104e9963472" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.259650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"57f697b67de158a3ce68cc151ee177af4d5fbbef40b765f3f2786ee488751ff9"} Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.272880 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.344799 4832 scope.go:117] "RemoveContainer" containerID="5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.362050 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-plv66"] Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.371878 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-plv66"] Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.372215 4832 scope.go:117] "RemoveContainer" containerID="955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.394237 4832 scope.go:117] "RemoveContainer" 
containerID="4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.422534 4832 scope.go:117] "RemoveContainer" containerID="5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.443568 4832 scope.go:117] "RemoveContainer" containerID="4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.462609 4832 scope.go:117] "RemoveContainer" containerID="9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.478313 4832 scope.go:117] "RemoveContainer" containerID="e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.501407 4832 scope.go:117] "RemoveContainer" containerID="ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.518111 4832 scope.go:117] "RemoveContainer" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.518800 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": container with ID starting with d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741 not found: ID does not exist" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.518861 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} err="failed to get container status \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": rpc error: code = NotFound desc = could not 
find container \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": container with ID starting with d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.518910 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.519416 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": container with ID starting with b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd not found: ID does not exist" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.519462 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} err="failed to get container status \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": rpc error: code = NotFound desc = could not find container \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": container with ID starting with b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.519485 4832 scope.go:117] "RemoveContainer" containerID="5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.520265 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": container with ID starting with 5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1 not found: ID 
does not exist" containerID="5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.520299 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} err="failed to get container status \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": rpc error: code = NotFound desc = could not find container \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": container with ID starting with 5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.520319 4832 scope.go:117] "RemoveContainer" containerID="955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.520710 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": container with ID starting with 955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47 not found: ID does not exist" containerID="955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.520743 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} err="failed to get container status \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": rpc error: code = NotFound desc = could not find container \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": container with ID starting with 955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.520766 4832 
scope.go:117] "RemoveContainer" containerID="4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.521161 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": container with ID starting with 4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef not found: ID does not exist" containerID="4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.521194 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} err="failed to get container status \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": rpc error: code = NotFound desc = could not find container \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": container with ID starting with 4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.521216 4832 scope.go:117] "RemoveContainer" containerID="5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.521685 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": container with ID starting with 5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c not found: ID does not exist" containerID="5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.521756 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} err="failed to get container status \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": rpc error: code = NotFound desc = could not find container \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": container with ID starting with 5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.521807 4832 scope.go:117] "RemoveContainer" containerID="4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.522367 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": container with ID starting with 4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d not found: ID does not exist" containerID="4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.522423 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} err="failed to get container status \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": rpc error: code = NotFound desc = could not find container \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": container with ID starting with 4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.522458 4832 scope.go:117] "RemoveContainer" containerID="9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.522856 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": container with ID starting with 9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99 not found: ID does not exist" containerID="9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.522895 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} err="failed to get container status \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": rpc error: code = NotFound desc = could not find container \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": container with ID starting with 9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.522918 4832 scope.go:117] "RemoveContainer" containerID="e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.523375 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": container with ID starting with e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68 not found: ID does not exist" containerID="e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.523450 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} err="failed to get container status \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": rpc error: code = NotFound desc = could not find container 
\"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": container with ID starting with e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.523491 4832 scope.go:117] "RemoveContainer" containerID="ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1" Jan 25 08:08:42 crc kubenswrapper[4832]: E0125 08:08:42.523904 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": container with ID starting with ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1 not found: ID does not exist" containerID="ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.523965 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} err="failed to get container status \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": rpc error: code = NotFound desc = could not find container \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": container with ID starting with ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.524000 4832 scope.go:117] "RemoveContainer" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.524411 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} err="failed to get container status \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": rpc error: code = NotFound desc = could not find 
container \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": container with ID starting with d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.524451 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.524796 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} err="failed to get container status \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": rpc error: code = NotFound desc = could not find container \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": container with ID starting with b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.524832 4832 scope.go:117] "RemoveContainer" containerID="5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.525254 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} err="failed to get container status \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": rpc error: code = NotFound desc = could not find container \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": container with ID starting with 5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.525287 4832 scope.go:117] "RemoveContainer" containerID="955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.525619 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} err="failed to get container status \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": rpc error: code = NotFound desc = could not find container \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": container with ID starting with 955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.525650 4832 scope.go:117] "RemoveContainer" containerID="4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.525909 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} err="failed to get container status \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": rpc error: code = NotFound desc = could not find container \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": container with ID starting with 4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.525942 4832 scope.go:117] "RemoveContainer" containerID="5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.526249 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} err="failed to get container status \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": rpc error: code = NotFound desc = could not find container \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": container with ID starting with 
5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.526286 4832 scope.go:117] "RemoveContainer" containerID="4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.526612 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} err="failed to get container status \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": rpc error: code = NotFound desc = could not find container \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": container with ID starting with 4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.526640 4832 scope.go:117] "RemoveContainer" containerID="9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.526975 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} err="failed to get container status \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": rpc error: code = NotFound desc = could not find container \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": container with ID starting with 9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.527006 4832 scope.go:117] "RemoveContainer" containerID="e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.527317 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} err="failed to get container status \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": rpc error: code = NotFound desc = could not find container \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": container with ID starting with e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.527339 4832 scope.go:117] "RemoveContainer" containerID="ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.527642 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} err="failed to get container status \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": rpc error: code = NotFound desc = could not find container \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": container with ID starting with ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.527665 4832 scope.go:117] "RemoveContainer" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.528068 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} err="failed to get container status \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": rpc error: code = NotFound desc = could not find container \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": container with ID starting with d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741 not found: ID does not 
exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.528088 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.528475 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} err="failed to get container status \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": rpc error: code = NotFound desc = could not find container \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": container with ID starting with b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.528494 4832 scope.go:117] "RemoveContainer" containerID="5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.528867 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} err="failed to get container status \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": rpc error: code = NotFound desc = could not find container \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": container with ID starting with 5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.528905 4832 scope.go:117] "RemoveContainer" containerID="955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.529303 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} err="failed to get container status 
\"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": rpc error: code = NotFound desc = could not find container \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": container with ID starting with 955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.529332 4832 scope.go:117] "RemoveContainer" containerID="4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.529734 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} err="failed to get container status \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": rpc error: code = NotFound desc = could not find container \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": container with ID starting with 4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.529766 4832 scope.go:117] "RemoveContainer" containerID="5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.530193 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} err="failed to get container status \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": rpc error: code = NotFound desc = could not find container \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": container with ID starting with 5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.530217 4832 scope.go:117] "RemoveContainer" 
containerID="4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.530635 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} err="failed to get container status \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": rpc error: code = NotFound desc = could not find container \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": container with ID starting with 4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.530669 4832 scope.go:117] "RemoveContainer" containerID="9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.531020 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} err="failed to get container status \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": rpc error: code = NotFound desc = could not find container \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": container with ID starting with 9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.531048 4832 scope.go:117] "RemoveContainer" containerID="e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.531437 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} err="failed to get container status \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": rpc error: code = NotFound desc = could 
not find container \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": container with ID starting with e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.531460 4832 scope.go:117] "RemoveContainer" containerID="ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.531800 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} err="failed to get container status \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": rpc error: code = NotFound desc = could not find container \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": container with ID starting with ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.531819 4832 scope.go:117] "RemoveContainer" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.532175 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} err="failed to get container status \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": rpc error: code = NotFound desc = could not find container \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": container with ID starting with d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.532209 4832 scope.go:117] "RemoveContainer" containerID="b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 
08:08:42.532621 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd"} err="failed to get container status \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": rpc error: code = NotFound desc = could not find container \"b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd\": container with ID starting with b9360fc46a4533171758f5c0111aec5209164d6ef530b6c4c7047c14a347f7bd not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.532642 4832 scope.go:117] "RemoveContainer" containerID="5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.532964 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1"} err="failed to get container status \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": rpc error: code = NotFound desc = could not find container \"5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1\": container with ID starting with 5d82289bf3a8f5881decb5d348cc43fdfd61f4ce6af17013a893b687d2c759d1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.532995 4832 scope.go:117] "RemoveContainer" containerID="955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.533297 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47"} err="failed to get container status \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": rpc error: code = NotFound desc = could not find container \"955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47\": container with ID starting with 
955df1f749685e35f57096ab341705a767f9f044c498ff9fe0c578205ab00e47 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.533319 4832 scope.go:117] "RemoveContainer" containerID="4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.533695 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef"} err="failed to get container status \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": rpc error: code = NotFound desc = could not find container \"4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef\": container with ID starting with 4a4281c5178e1f538e268252a65fbf98cf6d3febdb246a148f96a4aa074654ef not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.533724 4832 scope.go:117] "RemoveContainer" containerID="5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.534040 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c"} err="failed to get container status \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": rpc error: code = NotFound desc = could not find container \"5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c\": container with ID starting with 5b2bdf85709ae59146893142e9c99259a30d0a3d382b2212b1863f677f6afc2c not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.534064 4832 scope.go:117] "RemoveContainer" containerID="4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.534416 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d"} err="failed to get container status \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": rpc error: code = NotFound desc = could not find container \"4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d\": container with ID starting with 4eb8d5ded80c75addd304eb271c805a5558200db4ad062ef7354d8a0e4d2892d not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.534454 4832 scope.go:117] "RemoveContainer" containerID="9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.534728 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99"} err="failed to get container status \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": rpc error: code = NotFound desc = could not find container \"9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99\": container with ID starting with 9039a4038315d24ad4f721f3a16dc792881c104d23270f4ab5ffb3d84ff4cb99 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.534757 4832 scope.go:117] "RemoveContainer" containerID="e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.535214 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68"} err="failed to get container status \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": rpc error: code = NotFound desc = could not find container \"e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68\": container with ID starting with e0de5e2c0084fa8b9faf368e61b965f84d8411bcbdfb8b3cf6a35f4bc6088e68 not found: ID does not 
exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.535238 4832 scope.go:117] "RemoveContainer" containerID="ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.535597 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1"} err="failed to get container status \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": rpc error: code = NotFound desc = could not find container \"ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1\": container with ID starting with ac96bdf8380dbae226d8f186a0449b986660f21889eb73734620b26fb796fbf1 not found: ID does not exist" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.535629 4832 scope.go:117] "RemoveContainer" containerID="d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741" Jan 25 08:08:42 crc kubenswrapper[4832]: I0125 08:08:42.535960 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741"} err="failed to get container status \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": rpc error: code = NotFound desc = could not find container \"d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741\": container with ID starting with d3706bdff863467890f6e3493480a401b3ed42903abef7290645045a203f1741 not found: ID does not exist" Jan 25 08:08:43 crc kubenswrapper[4832]: I0125 08:08:43.267799 4832 generic.go:334] "Generic (PLEG): container finished" podID="c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b" containerID="6f8902c6e901356ff20dedbd241e7b88342f7f59fb571278418ae2e0cd2a77b9" exitCode=0 Jan 25 08:08:43 crc kubenswrapper[4832]: I0125 08:08:43.267901 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" 
event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerDied","Data":"6f8902c6e901356ff20dedbd241e7b88342f7f59fb571278418ae2e0cd2a77b9"} Jan 25 08:08:43 crc kubenswrapper[4832]: I0125 08:08:43.679658 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6fdc72-86dc-433d-8aac-57b0eeefaca3" path="/var/lib/kubelet/pods/9c6fdc72-86dc-433d-8aac-57b0eeefaca3/volumes" Jan 25 08:08:44 crc kubenswrapper[4832]: I0125 08:08:44.280298 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"3f7956c5c648514358088ab49bca4bb54be41f02b64ccca1eeb26f7ec05303a7"} Jan 25 08:08:44 crc kubenswrapper[4832]: I0125 08:08:44.280817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"26a417fa8f00d7a0bf040ea78a439ccd33ba4a96bd7d2d84bc42fe71f2dd970a"} Jan 25 08:08:44 crc kubenswrapper[4832]: I0125 08:08:44.280837 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"f45cecb8d24f2c39eb2422a7d391d510f077c0b52a8da10223a545ae170dcb59"} Jan 25 08:08:44 crc kubenswrapper[4832]: I0125 08:08:44.280852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"180f2f5b0e3fcec2799970560e2ff6bfd8307c0605801af2719ac345d7286e8a"} Jan 25 08:08:44 crc kubenswrapper[4832]: I0125 08:08:44.280865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"39db2739b8dc6d86873ad7333c3524a10848440be2b370d06360f591dbc444a0"} 
Jan 25 08:08:44 crc kubenswrapper[4832]: I0125 08:08:44.280881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"6581473e993d02729288c37b278960fae348f7adabfac8a4035dc6376f439aa3"} Jan 25 08:08:46 crc kubenswrapper[4832]: I0125 08:08:46.290702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"421c01a771bd8135cd9112595d270a6fe21076103847825c9732c48f31f7c74a"} Jan 25 08:08:49 crc kubenswrapper[4832]: I0125 08:08:49.316261 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" event={"ID":"c07d7ff5-d1b6-48b4-82bf-9de0e813ba3b","Type":"ContainerStarted","Data":"37da1296d6569533793070ecc6d744660eb6fb8fb481db003faaaf0694332a0e"} Jan 25 08:08:49 crc kubenswrapper[4832]: I0125 08:08:49.316912 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:49 crc kubenswrapper[4832]: I0125 08:08:49.316925 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:49 crc kubenswrapper[4832]: I0125 08:08:49.317036 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:49 crc kubenswrapper[4832]: I0125 08:08:49.348339 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:49 crc kubenswrapper[4832]: I0125 08:08:49.352140 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" podStartSLOduration=8.352118735 podStartE2EDuration="8.352118735s" podCreationTimestamp="2026-01-25 08:08:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:08:49.351829607 +0000 UTC m=+712.025653160" watchObservedRunningTime="2026-01-25 08:08:49.352118735 +0000 UTC m=+712.025942268" Jan 25 08:08:49 crc kubenswrapper[4832]: I0125 08:08:49.352463 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:08:52 crc kubenswrapper[4832]: I0125 08:08:52.150159 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:08:52 crc kubenswrapper[4832]: I0125 08:08:52.151580 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:08:57 crc kubenswrapper[4832]: I0125 08:08:57.675665 4832 scope.go:117] "RemoveContainer" containerID="ed577a9d1a5da395208b09f520d83f7012e027930420e43192c4061c5e804650" Jan 25 08:08:57 crc kubenswrapper[4832]: E0125 08:08:57.676032 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kzrcf_openshift-multus(5439ad80-35f6-4da4-8745-8104e9963472)\"" pod="openshift-multus/multus-kzrcf" podUID="5439ad80-35f6-4da4-8745-8104e9963472" Jan 25 08:08:57 crc kubenswrapper[4832]: I0125 08:08:57.858256 4832 scope.go:117] "RemoveContainer" containerID="bcaff12dd09b5de72efcfafa4784bfc96159d855dfb239fc5120bb5fb0c6653e" Jan 25 08:08:59 crc 
kubenswrapper[4832]: I0125 08:08:59.374653 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/2.log" Jan 25 08:09:11 crc kubenswrapper[4832]: I0125 08:09:11.669847 4832 scope.go:117] "RemoveContainer" containerID="ed577a9d1a5da395208b09f520d83f7012e027930420e43192c4061c5e804650" Jan 25 08:09:12 crc kubenswrapper[4832]: I0125 08:09:12.143570 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8snq7" Jan 25 08:09:12 crc kubenswrapper[4832]: I0125 08:09:12.462080 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kzrcf_5439ad80-35f6-4da4-8745-8104e9963472/kube-multus/2.log" Jan 25 08:09:12 crc kubenswrapper[4832]: I0125 08:09:12.462158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kzrcf" event={"ID":"5439ad80-35f6-4da4-8745-8104e9963472","Type":"ContainerStarted","Data":"9677163c1466f12656b0decf37b4e8e9e21b70b49d09dce07e5464f787338e8e"} Jan 25 08:09:19 crc kubenswrapper[4832]: I0125 08:09:19.788129 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59"] Jan 25 08:09:19 crc kubenswrapper[4832]: I0125 08:09:19.790020 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:19 crc kubenswrapper[4832]: I0125 08:09:19.792345 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 25 08:09:19 crc kubenswrapper[4832]: I0125 08:09:19.801984 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59"] Jan 25 08:09:19 crc kubenswrapper[4832]: I0125 08:09:19.937065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4nm\" (UniqueName: \"kubernetes.io/projected/65372180-5040-413f-a789-bebad10ff6d8-kube-api-access-sc4nm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:19 crc kubenswrapper[4832]: I0125 08:09:19.937122 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:19 crc kubenswrapper[4832]: I0125 08:09:19.937215 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: 
I0125 08:09:20.038308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.038407 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4nm\" (UniqueName: \"kubernetes.io/projected/65372180-5040-413f-a789-bebad10ff6d8-kube-api-access-sc4nm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.038434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.038804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.038834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.058265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4nm\" (UniqueName: \"kubernetes.io/projected/65372180-5040-413f-a789-bebad10ff6d8-kube-api-access-sc4nm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.154822 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.397182 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59"] Jan 25 08:09:20 crc kubenswrapper[4832]: W0125 08:09:20.403398 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65372180_5040_413f_a789_bebad10ff6d8.slice/crio-7501630dd143db7cccd2925050f01692a0477e8b6dffcb0180d5942b559b9cc6 WatchSource:0}: Error finding container 7501630dd143db7cccd2925050f01692a0477e8b6dffcb0180d5942b559b9cc6: Status 404 returned error can't find the container with id 7501630dd143db7cccd2925050f01692a0477e8b6dffcb0180d5942b559b9cc6 Jan 25 08:09:20 crc kubenswrapper[4832]: I0125 08:09:20.527852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" 
event={"ID":"65372180-5040-413f-a789-bebad10ff6d8","Type":"ContainerStarted","Data":"7501630dd143db7cccd2925050f01692a0477e8b6dffcb0180d5942b559b9cc6"} Jan 25 08:09:21 crc kubenswrapper[4832]: I0125 08:09:21.534484 4832 generic.go:334] "Generic (PLEG): container finished" podID="65372180-5040-413f-a789-bebad10ff6d8" containerID="ba4588d592b0e9d6a429b84f8c4897948cb9f82b7e1c8db797934ecf6cd6ed59" exitCode=0 Jan 25 08:09:21 crc kubenswrapper[4832]: I0125 08:09:21.534646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" event={"ID":"65372180-5040-413f-a789-bebad10ff6d8","Type":"ContainerDied","Data":"ba4588d592b0e9d6a429b84f8c4897948cb9f82b7e1c8db797934ecf6cd6ed59"} Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.111793 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5pk6t"] Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.112902 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.134835 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pk6t"] Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.149566 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.149655 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.149760 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.150684 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e5cad5f69dc7b0bf2005b84dd78b370ac52759a8ef11d5ebaebb12ca134de5d"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.150794 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" 
containerID="cri-o://2e5cad5f69dc7b0bf2005b84dd78b370ac52759a8ef11d5ebaebb12ca134de5d" gracePeriod=600 Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.270035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-catalog-content\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.270091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8f4n\" (UniqueName: \"kubernetes.io/projected/65e58712-2959-43d7-8db4-50f22e9eacf5-kube-api-access-p8f4n\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.270118 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-utilities\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.371119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-utilities\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.371444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-catalog-content\") pod 
\"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.371486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8f4n\" (UniqueName: \"kubernetes.io/projected/65e58712-2959-43d7-8db4-50f22e9eacf5-kube-api-access-p8f4n\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.371674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-utilities\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.371938 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-catalog-content\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.394292 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8f4n\" (UniqueName: \"kubernetes.io/projected/65e58712-2959-43d7-8db4-50f22e9eacf5-kube-api-access-p8f4n\") pod \"redhat-operators-5pk6t\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.449010 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.549224 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="2e5cad5f69dc7b0bf2005b84dd78b370ac52759a8ef11d5ebaebb12ca134de5d" exitCode=0 Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.549267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"2e5cad5f69dc7b0bf2005b84dd78b370ac52759a8ef11d5ebaebb12ca134de5d"} Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.549299 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"3375547b40eab52484bd4c11f9fadcc1b41ff739f66fbe9ad0a6f2e89555dcb1"} Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.549321 4832 scope.go:117] "RemoveContainer" containerID="63d1a0b13b16f0668b1c02ef162797d02564ab151b4d705b380dc4d22fa1cf34" Jan 25 08:09:22 crc kubenswrapper[4832]: I0125 08:09:22.659774 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pk6t"] Jan 25 08:09:23 crc kubenswrapper[4832]: I0125 08:09:23.554948 4832 generic.go:334] "Generic (PLEG): container finished" podID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerID="9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb" exitCode=0 Jan 25 08:09:23 crc kubenswrapper[4832]: I0125 08:09:23.555618 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pk6t" event={"ID":"65e58712-2959-43d7-8db4-50f22e9eacf5","Type":"ContainerDied","Data":"9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb"} Jan 25 08:09:23 crc kubenswrapper[4832]: I0125 08:09:23.555642 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pk6t" event={"ID":"65e58712-2959-43d7-8db4-50f22e9eacf5","Type":"ContainerStarted","Data":"e6da144f5ee6e7c79706b8b2fe6d3d2a2304b39bc7d84e182e433602b9ee8673"} Jan 25 08:09:23 crc kubenswrapper[4832]: I0125 08:09:23.560098 4832 generic.go:334] "Generic (PLEG): container finished" podID="65372180-5040-413f-a789-bebad10ff6d8" containerID="b59e117f495ca81505cd1e9279a26b8945af40d5af25224574e5bdbe501834f5" exitCode=0 Jan 25 08:09:23 crc kubenswrapper[4832]: I0125 08:09:23.560127 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" event={"ID":"65372180-5040-413f-a789-bebad10ff6d8","Type":"ContainerDied","Data":"b59e117f495ca81505cd1e9279a26b8945af40d5af25224574e5bdbe501834f5"} Jan 25 08:09:24 crc kubenswrapper[4832]: I0125 08:09:24.015558 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 25 08:09:24 crc kubenswrapper[4832]: I0125 08:09:24.568060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pk6t" event={"ID":"65e58712-2959-43d7-8db4-50f22e9eacf5","Type":"ContainerStarted","Data":"917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178"} Jan 25 08:09:24 crc kubenswrapper[4832]: I0125 08:09:24.571495 4832 generic.go:334] "Generic (PLEG): container finished" podID="65372180-5040-413f-a789-bebad10ff6d8" containerID="c80dc479c0826cb5bb41c6b44a44dcc167a104f3af5f7f1aa91f92369a61ad0d" exitCode=0 Jan 25 08:09:24 crc kubenswrapper[4832]: I0125 08:09:24.571547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" event={"ID":"65372180-5040-413f-a789-bebad10ff6d8","Type":"ContainerDied","Data":"c80dc479c0826cb5bb41c6b44a44dcc167a104f3af5f7f1aa91f92369a61ad0d"} 
Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.579181 4832 generic.go:334] "Generic (PLEG): container finished" podID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerID="917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178" exitCode=0 Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.579244 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pk6t" event={"ID":"65e58712-2959-43d7-8db4-50f22e9eacf5","Type":"ContainerDied","Data":"917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178"} Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.782569 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.914953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4nm\" (UniqueName: \"kubernetes.io/projected/65372180-5040-413f-a789-bebad10ff6d8-kube-api-access-sc4nm\") pod \"65372180-5040-413f-a789-bebad10ff6d8\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.915036 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-bundle\") pod \"65372180-5040-413f-a789-bebad10ff6d8\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.915074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-util\") pod \"65372180-5040-413f-a789-bebad10ff6d8\" (UID: \"65372180-5040-413f-a789-bebad10ff6d8\") " Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.915692 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-bundle" (OuterVolumeSpecName: "bundle") pod "65372180-5040-413f-a789-bebad10ff6d8" (UID: "65372180-5040-413f-a789-bebad10ff6d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.921760 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65372180-5040-413f-a789-bebad10ff6d8-kube-api-access-sc4nm" (OuterVolumeSpecName: "kube-api-access-sc4nm") pod "65372180-5040-413f-a789-bebad10ff6d8" (UID: "65372180-5040-413f-a789-bebad10ff6d8"). InnerVolumeSpecName "kube-api-access-sc4nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:09:25 crc kubenswrapper[4832]: I0125 08:09:25.928447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-util" (OuterVolumeSpecName: "util") pod "65372180-5040-413f-a789-bebad10ff6d8" (UID: "65372180-5040-413f-a789-bebad10ff6d8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.016751 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4nm\" (UniqueName: \"kubernetes.io/projected/65372180-5040-413f-a789-bebad10ff6d8-kube-api-access-sc4nm\") on node \"crc\" DevicePath \"\"" Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.016814 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.016842 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65372180-5040-413f-a789-bebad10ff6d8-util\") on node \"crc\" DevicePath \"\"" Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.590317 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pk6t" event={"ID":"65e58712-2959-43d7-8db4-50f22e9eacf5","Type":"ContainerStarted","Data":"e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00"} Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.592291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" event={"ID":"65372180-5040-413f-a789-bebad10ff6d8","Type":"ContainerDied","Data":"7501630dd143db7cccd2925050f01692a0477e8b6dffcb0180d5942b559b9cc6"} Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.592328 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7501630dd143db7cccd2925050f01692a0477e8b6dffcb0180d5942b559b9cc6" Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.592358 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59" Jan 25 08:09:26 crc kubenswrapper[4832]: I0125 08:09:26.609459 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5pk6t" podStartSLOduration=2.013332309 podStartE2EDuration="4.609437307s" podCreationTimestamp="2026-01-25 08:09:22 +0000 UTC" firstStartedPulling="2026-01-25 08:09:23.556870184 +0000 UTC m=+746.230693717" lastFinishedPulling="2026-01-25 08:09:26.152975182 +0000 UTC m=+748.826798715" observedRunningTime="2026-01-25 08:09:26.608615921 +0000 UTC m=+749.282439474" watchObservedRunningTime="2026-01-25 08:09:26.609437307 +0000 UTC m=+749.283260890" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.221765 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8j4d7"] Jan 25 08:09:30 crc kubenswrapper[4832]: E0125 08:09:30.222404 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65372180-5040-413f-a789-bebad10ff6d8" containerName="extract" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.222421 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65372180-5040-413f-a789-bebad10ff6d8" containerName="extract" Jan 25 08:09:30 crc kubenswrapper[4832]: E0125 08:09:30.222436 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65372180-5040-413f-a789-bebad10ff6d8" containerName="util" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.222444 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65372180-5040-413f-a789-bebad10ff6d8" containerName="util" Jan 25 08:09:30 crc kubenswrapper[4832]: E0125 08:09:30.222467 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65372180-5040-413f-a789-bebad10ff6d8" containerName="pull" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.222476 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="65372180-5040-413f-a789-bebad10ff6d8" containerName="pull" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.222620 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65372180-5040-413f-a789-bebad10ff6d8" containerName="extract" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.223068 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.227017 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.227109 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cnwmk" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.229256 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.239680 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8j4d7"] Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.274162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zftrr\" (UniqueName: \"kubernetes.io/projected/fdb77b21-70d0-4666-807f-60d0aed1040a-kube-api-access-zftrr\") pod \"nmstate-operator-646758c888-8j4d7\" (UID: \"fdb77b21-70d0-4666-807f-60d0aed1040a\") " pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.375304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zftrr\" (UniqueName: \"kubernetes.io/projected/fdb77b21-70d0-4666-807f-60d0aed1040a-kube-api-access-zftrr\") pod \"nmstate-operator-646758c888-8j4d7\" (UID: \"fdb77b21-70d0-4666-807f-60d0aed1040a\") " 
pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.392231 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zftrr\" (UniqueName: \"kubernetes.io/projected/fdb77b21-70d0-4666-807f-60d0aed1040a-kube-api-access-zftrr\") pod \"nmstate-operator-646758c888-8j4d7\" (UID: \"fdb77b21-70d0-4666-807f-60d0aed1040a\") " pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.542240 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" Jan 25 08:09:30 crc kubenswrapper[4832]: I0125 08:09:30.997342 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8j4d7"] Jan 25 08:09:31 crc kubenswrapper[4832]: W0125 08:09:31.000846 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb77b21_70d0_4666_807f_60d0aed1040a.slice/crio-3d6d74ce7cc408be0d88c00f450188036cfac6c7eb652a0d78ffb1bc78412b3d WatchSource:0}: Error finding container 3d6d74ce7cc408be0d88c00f450188036cfac6c7eb652a0d78ffb1bc78412b3d: Status 404 returned error can't find the container with id 3d6d74ce7cc408be0d88c00f450188036cfac6c7eb652a0d78ffb1bc78412b3d Jan 25 08:09:31 crc kubenswrapper[4832]: I0125 08:09:31.620652 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" event={"ID":"fdb77b21-70d0-4666-807f-60d0aed1040a","Type":"ContainerStarted","Data":"3d6d74ce7cc408be0d88c00f450188036cfac6c7eb652a0d78ffb1bc78412b3d"} Jan 25 08:09:32 crc kubenswrapper[4832]: I0125 08:09:32.449978 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:32 crc kubenswrapper[4832]: I0125 08:09:32.450300 4832 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:32 crc kubenswrapper[4832]: I0125 08:09:32.506843 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:32 crc kubenswrapper[4832]: I0125 08:09:32.674622 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:33 crc kubenswrapper[4832]: I0125 08:09:33.631964 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" event={"ID":"fdb77b21-70d0-4666-807f-60d0aed1040a","Type":"ContainerStarted","Data":"e101d531f79de47c9c32c677a2ccfe2744d68ec99b350886d3f060342f563930"} Jan 25 08:09:33 crc kubenswrapper[4832]: I0125 08:09:33.651827 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-8j4d7" podStartSLOduration=1.5386158989999998 podStartE2EDuration="3.651811408s" podCreationTimestamp="2026-01-25 08:09:30 +0000 UTC" firstStartedPulling="2026-01-25 08:09:31.00300115 +0000 UTC m=+753.676824683" lastFinishedPulling="2026-01-25 08:09:33.116196659 +0000 UTC m=+755.790020192" observedRunningTime="2026-01-25 08:09:33.647659317 +0000 UTC m=+756.321482850" watchObservedRunningTime="2026-01-25 08:09:33.651811408 +0000 UTC m=+756.325634931" Jan 25 08:09:35 crc kubenswrapper[4832]: I0125 08:09:35.100140 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pk6t"] Jan 25 08:09:35 crc kubenswrapper[4832]: I0125 08:09:35.100339 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5pk6t" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="registry-server" containerID="cri-o://e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00" gracePeriod=2 Jan 25 08:09:38 crc 
kubenswrapper[4832]: I0125 08:09:38.549885 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.661344 4832 generic.go:334] "Generic (PLEG): container finished" podID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerID="e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00" exitCode=0 Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.661408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pk6t" event={"ID":"65e58712-2959-43d7-8db4-50f22e9eacf5","Type":"ContainerDied","Data":"e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00"} Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.661461 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pk6t" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.661483 4832 scope.go:117] "RemoveContainer" containerID="e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.661466 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pk6t" event={"ID":"65e58712-2959-43d7-8db4-50f22e9eacf5","Type":"ContainerDied","Data":"e6da144f5ee6e7c79706b8b2fe6d3d2a2304b39bc7d84e182e433602b9ee8673"} Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.678201 4832 scope.go:117] "RemoveContainer" containerID="917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.695981 4832 scope.go:117] "RemoveContainer" containerID="9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.714369 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8f4n\" (UniqueName: 
\"kubernetes.io/projected/65e58712-2959-43d7-8db4-50f22e9eacf5-kube-api-access-p8f4n\") pod \"65e58712-2959-43d7-8db4-50f22e9eacf5\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.714528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-catalog-content\") pod \"65e58712-2959-43d7-8db4-50f22e9eacf5\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.714601 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-utilities\") pod \"65e58712-2959-43d7-8db4-50f22e9eacf5\" (UID: \"65e58712-2959-43d7-8db4-50f22e9eacf5\") " Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.715431 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-utilities" (OuterVolumeSpecName: "utilities") pod "65e58712-2959-43d7-8db4-50f22e9eacf5" (UID: "65e58712-2959-43d7-8db4-50f22e9eacf5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.720793 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e58712-2959-43d7-8db4-50f22e9eacf5-kube-api-access-p8f4n" (OuterVolumeSpecName: "kube-api-access-p8f4n") pod "65e58712-2959-43d7-8db4-50f22e9eacf5" (UID: "65e58712-2959-43d7-8db4-50f22e9eacf5"). InnerVolumeSpecName "kube-api-access-p8f4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.759102 4832 scope.go:117] "RemoveContainer" containerID="e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00" Jan 25 08:09:38 crc kubenswrapper[4832]: E0125 08:09:38.759591 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00\": container with ID starting with e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00 not found: ID does not exist" containerID="e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.759632 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00"} err="failed to get container status \"e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00\": rpc error: code = NotFound desc = could not find container \"e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00\": container with ID starting with e5b9e8138cfca9e09490599957359e3e5ce70c3b645d5f5e87b2cc6ebe3aab00 not found: ID does not exist" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.759659 4832 scope.go:117] "RemoveContainer" containerID="917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178" Jan 25 08:09:38 crc kubenswrapper[4832]: E0125 08:09:38.760013 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178\": container with ID starting with 917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178 not found: ID does not exist" containerID="917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.760035 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178"} err="failed to get container status \"917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178\": rpc error: code = NotFound desc = could not find container \"917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178\": container with ID starting with 917b82f64aae5cc60803240c84063a9c9c7384924eff237676e907ad18a40178 not found: ID does not exist" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.760048 4832 scope.go:117] "RemoveContainer" containerID="9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb" Jan 25 08:09:38 crc kubenswrapper[4832]: E0125 08:09:38.760401 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb\": container with ID starting with 9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb not found: ID does not exist" containerID="9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.760427 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb"} err="failed to get container status \"9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb\": rpc error: code = NotFound desc = could not find container \"9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb\": container with ID starting with 9d1398361a070e1d3f3fcc4dca8afdeed337dc0eb661f92010bbec9a0aa42ebb not found: ID does not exist" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.815611 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.815648 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8f4n\" (UniqueName: \"kubernetes.io/projected/65e58712-2959-43d7-8db4-50f22e9eacf5-kube-api-access-p8f4n\") on node \"crc\" DevicePath \"\"" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.840605 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65e58712-2959-43d7-8db4-50f22e9eacf5" (UID: "65e58712-2959-43d7-8db4-50f22e9eacf5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.917771 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e58712-2959-43d7-8db4-50f22e9eacf5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.987348 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pk6t"] Jan 25 08:09:38 crc kubenswrapper[4832]: I0125 08:09:38.993157 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5pk6t"] Jan 25 08:09:39 crc kubenswrapper[4832]: I0125 08:09:39.687100 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" path="/var/lib/kubelet/pods/65e58712-2959-43d7-8db4-50f22e9eacf5/volumes" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.193833 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2kvpm"] Jan 25 08:09:40 crc kubenswrapper[4832]: E0125 08:09:40.194519 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="extract-content" Jan 25 
08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.194546 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="extract-content" Jan 25 08:09:40 crc kubenswrapper[4832]: E0125 08:09:40.194568 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="registry-server" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.194596 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="registry-server" Jan 25 08:09:40 crc kubenswrapper[4832]: E0125 08:09:40.194609 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="extract-utilities" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.194620 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="extract-utilities" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.194778 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e58712-2959-43d7-8db4-50f22e9eacf5" containerName="registry-server" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.195743 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.201728 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.203347 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.204736 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9xvzn" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.206683 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.214256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2kvpm"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.223735 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.228146 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-rjtfb"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.230632 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.234210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n8pq\" (UniqueName: \"kubernetes.io/projected/e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce-kube-api-access-5n8pq\") pod \"nmstate-metrics-54757c584b-2kvpm\" (UID: \"e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.234398 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-ovs-socket\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.234497 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe63b032-94cc-4495-bc9b-84040a04da49-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c4g4v\" (UID: \"fe63b032-94cc-4495-bc9b-84040a04da49\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.234575 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-nmstate-lock\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.234652 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7pvw\" (UniqueName: 
\"kubernetes.io/projected/fe63b032-94cc-4495-bc9b-84040a04da49-kube-api-access-c7pvw\") pod \"nmstate-webhook-8474b5b9d8-c4g4v\" (UID: \"fe63b032-94cc-4495-bc9b-84040a04da49\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.234716 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-dbus-socket\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.234784 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jj6\" (UniqueName: \"kubernetes.io/projected/83613ef6-706d-43d4-b310-98579e87fb5a-kube-api-access-x2jj6\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.315165 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.315814 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.325702 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-8lqn7" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.325917 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.328420 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n8pq\" (UniqueName: \"kubernetes.io/projected/e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce-kube-api-access-5n8pq\") pod \"nmstate-metrics-54757c584b-2kvpm\" (UID: \"e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-ovs-socket\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335355 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe63b032-94cc-4495-bc9b-84040a04da49-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c4g4v\" (UID: \"fe63b032-94cc-4495-bc9b-84040a04da49\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-nmstate-lock\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335526 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335551 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7pvw\" (UniqueName: \"kubernetes.io/projected/fe63b032-94cc-4495-bc9b-84040a04da49-kube-api-access-c7pvw\") pod \"nmstate-webhook-8474b5b9d8-c4g4v\" (UID: \"fe63b032-94cc-4495-bc9b-84040a04da49\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335587 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-dbus-socket\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335618 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x2jj6\" (UniqueName: \"kubernetes.io/projected/83613ef6-706d-43d4-b310-98579e87fb5a-kube-api-access-x2jj6\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvhp\" (UniqueName: \"kubernetes.io/projected/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-kube-api-access-xtvhp\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.335999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-ovs-socket\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.336698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-nmstate-lock\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.336992 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/83613ef6-706d-43d4-b310-98579e87fb5a-dbus-socket\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.338918 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.352792 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n8pq\" (UniqueName: \"kubernetes.io/projected/e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce-kube-api-access-5n8pq\") pod \"nmstate-metrics-54757c584b-2kvpm\" (UID: \"e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.352892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe63b032-94cc-4495-bc9b-84040a04da49-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c4g4v\" (UID: \"fe63b032-94cc-4495-bc9b-84040a04da49\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.358418 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jj6\" (UniqueName: \"kubernetes.io/projected/83613ef6-706d-43d4-b310-98579e87fb5a-kube-api-access-x2jj6\") pod \"nmstate-handler-rjtfb\" (UID: \"83613ef6-706d-43d4-b310-98579e87fb5a\") " pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.360305 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7pvw\" (UniqueName: \"kubernetes.io/projected/fe63b032-94cc-4495-bc9b-84040a04da49-kube-api-access-c7pvw\") pod \"nmstate-webhook-8474b5b9d8-c4g4v\" (UID: \"fe63b032-94cc-4495-bc9b-84040a04da49\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.436406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: 
\"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.436455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.436492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvhp\" (UniqueName: \"kubernetes.io/projected/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-kube-api-access-xtvhp\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: E0125 08:09:40.436886 4832 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 25 08:09:40 crc kubenswrapper[4832]: E0125 08:09:40.436984 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-plugin-serving-cert podName:2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c nodeName:}" failed. No retries permitted until 2026-01-25 08:09:40.936962119 +0000 UTC m=+763.610785702 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-q6rnr" (UID: "2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c") : secret "plugin-serving-cert" not found Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.437677 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.451478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvhp\" (UniqueName: \"kubernetes.io/projected/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-kube-api-access-xtvhp\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.501223 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-78f98cfc5c-hbcdc"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.502293 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.511835 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78f98cfc5c-hbcdc"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.518770 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.529454 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.537299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-service-ca\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.537338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-oauth-serving-cert\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.537359 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-oauth-config\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.537402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-serving-cert\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.537436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-trusted-ca-bundle\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.537462 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-config\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.537497 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djsv\" (UniqueName: \"kubernetes.io/projected/19b9596c-7604-4d35-b2b2-249dcedabfb2-kube-api-access-7djsv\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.559885 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.639127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djsv\" (UniqueName: \"kubernetes.io/projected/19b9596c-7604-4d35-b2b2-249dcedabfb2-kube-api-access-7djsv\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.639558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-service-ca\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.639583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-oauth-serving-cert\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.639604 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-oauth-config\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.640584 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-oauth-serving-cert\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " 
pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.641128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-serving-cert\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.641243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-service-ca\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.641258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-trusted-ca-bundle\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.642427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-trusted-ca-bundle\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.642517 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-config\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 
08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.643260 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-config\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.645126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-serving-cert\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.645607 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19b9596c-7604-4d35-b2b2-249dcedabfb2-console-oauth-config\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.654614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djsv\" (UniqueName: \"kubernetes.io/projected/19b9596c-7604-4d35-b2b2-249dcedabfb2-kube-api-access-7djsv\") pod \"console-78f98cfc5c-hbcdc\" (UID: \"19b9596c-7604-4d35-b2b2-249dcedabfb2\") " pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.676602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjtfb" event={"ID":"83613ef6-706d-43d4-b310-98579e87fb5a","Type":"ContainerStarted","Data":"885f19096490d8f65542e422aab021681a38cf51260eaf5a80443b2b9015673d"} Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.817613 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.907155 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2kvpm"] Jan 25 08:09:40 crc kubenswrapper[4832]: W0125 08:09:40.912683 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode53d5a55_a9e1_406f_a7c0_b3e6bee8e9ce.slice/crio-ea57f65a777d28a18532f2db74097bc6fc9138fbb4967d6455101566ef13c3fb WatchSource:0}: Error finding container ea57f65a777d28a18532f2db74097bc6fc9138fbb4967d6455101566ef13c3fb: Status 404 returned error can't find the container with id ea57f65a777d28a18532f2db74097bc6fc9138fbb4967d6455101566ef13c3fb Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.946496 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v"] Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.947750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.952882 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q6rnr\" (UID: \"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:40 crc kubenswrapper[4832]: I0125 08:09:40.995734 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78f98cfc5c-hbcdc"] Jan 25 08:09:40 crc 
kubenswrapper[4832]: W0125 08:09:40.999350 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b9596c_7604_4d35_b2b2_249dcedabfb2.slice/crio-2e2563ebc3911ed71ba5089148b5369f65aaf150c6e0afe374f235e24bc4ff2d WatchSource:0}: Error finding container 2e2563ebc3911ed71ba5089148b5369f65aaf150c6e0afe374f235e24bc4ff2d: Status 404 returned error can't find the container with id 2e2563ebc3911ed71ba5089148b5369f65aaf150c6e0afe374f235e24bc4ff2d Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.233081 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.409769 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr"] Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.683616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" event={"ID":"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c","Type":"ContainerStarted","Data":"ddf9f36437cf5b3241a2db3fdb0fe8d89a15e4d7e874c9d6bf75768dd1fb2094"} Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.685141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78f98cfc5c-hbcdc" event={"ID":"19b9596c-7604-4d35-b2b2-249dcedabfb2","Type":"ContainerStarted","Data":"7cf0f23b9fadc36f4bbfa5a17158d295bc1375bd32e0f883e2e4bcadd99ce756"} Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.685204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78f98cfc5c-hbcdc" event={"ID":"19b9596c-7604-4d35-b2b2-249dcedabfb2","Type":"ContainerStarted","Data":"2e2563ebc3911ed71ba5089148b5369f65aaf150c6e0afe374f235e24bc4ff2d"} Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.686076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" event={"ID":"fe63b032-94cc-4495-bc9b-84040a04da49","Type":"ContainerStarted","Data":"540b7d99d4dcba14407f631a18710f9beb55717153709230e761c63531cd99aa"} Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.688858 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" event={"ID":"e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce","Type":"ContainerStarted","Data":"ea57f65a777d28a18532f2db74097bc6fc9138fbb4967d6455101566ef13c3fb"} Jan 25 08:09:41 crc kubenswrapper[4832]: I0125 08:09:41.707142 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-78f98cfc5c-hbcdc" podStartSLOduration=1.707119569 podStartE2EDuration="1.707119569s" podCreationTimestamp="2026-01-25 08:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:09:41.705800847 +0000 UTC m=+764.379624410" watchObservedRunningTime="2026-01-25 08:09:41.707119569 +0000 UTC m=+764.380943112" Jan 25 08:09:43 crc kubenswrapper[4832]: I0125 08:09:43.700350 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" event={"ID":"e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce","Type":"ContainerStarted","Data":"7f9b7b58ae90201ebf4bc39452028a65511d950f92d44526684a2f7a524d459d"} Jan 25 08:09:43 crc kubenswrapper[4832]: I0125 08:09:43.703980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rjtfb" event={"ID":"83613ef6-706d-43d4-b310-98579e87fb5a","Type":"ContainerStarted","Data":"141700c3797317740e880cdd4544524d668a7f476f020d5f4e04457ad0014cd8"} Jan 25 08:09:43 crc kubenswrapper[4832]: I0125 08:09:43.704341 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:43 crc kubenswrapper[4832]: I0125 08:09:43.710702 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" event={"ID":"fe63b032-94cc-4495-bc9b-84040a04da49","Type":"ContainerStarted","Data":"7915f12f2e16fc57eb25c8c38776131d933f16ef88540bd2d3112f0dc05794f4"} Jan 25 08:09:43 crc kubenswrapper[4832]: I0125 08:09:43.710846 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:09:43 crc kubenswrapper[4832]: I0125 08:09:43.720747 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-rjtfb" podStartSLOduration=1.391342955 podStartE2EDuration="3.720731813s" podCreationTimestamp="2026-01-25 08:09:40 +0000 UTC" firstStartedPulling="2026-01-25 08:09:40.581766469 +0000 UTC m=+763.255590002" lastFinishedPulling="2026-01-25 08:09:42.911155297 +0000 UTC m=+765.584978860" observedRunningTime="2026-01-25 08:09:43.715668503 +0000 UTC m=+766.389492036" watchObservedRunningTime="2026-01-25 08:09:43.720731813 +0000 UTC m=+766.394555336" Jan 25 08:09:43 crc kubenswrapper[4832]: I0125 08:09:43.737969 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" podStartSLOduration=1.721941115 podStartE2EDuration="3.737946194s" podCreationTimestamp="2026-01-25 08:09:40 +0000 UTC" firstStartedPulling="2026-01-25 08:09:40.955596371 +0000 UTC m=+763.629419904" lastFinishedPulling="2026-01-25 08:09:42.97160146 +0000 UTC m=+765.645424983" observedRunningTime="2026-01-25 08:09:43.733843556 +0000 UTC m=+766.407667089" watchObservedRunningTime="2026-01-25 08:09:43.737946194 +0000 UTC m=+766.411769727" Jan 25 08:09:44 crc kubenswrapper[4832]: I0125 08:09:44.716627 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" 
event={"ID":"2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c","Type":"ContainerStarted","Data":"130415e94caba8c437e110a80b3f15960f75ba6be177ba5d9c7a617bd480d867"} Jan 25 08:09:44 crc kubenswrapper[4832]: I0125 08:09:44.731765 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q6rnr" podStartSLOduration=2.212470783 podStartE2EDuration="4.731640518s" podCreationTimestamp="2026-01-25 08:09:40 +0000 UTC" firstStartedPulling="2026-01-25 08:09:41.420601335 +0000 UTC m=+764.094424868" lastFinishedPulling="2026-01-25 08:09:43.93977107 +0000 UTC m=+766.613594603" observedRunningTime="2026-01-25 08:09:44.728399157 +0000 UTC m=+767.402222700" watchObservedRunningTime="2026-01-25 08:09:44.731640518 +0000 UTC m=+767.405464081" Jan 25 08:09:45 crc kubenswrapper[4832]: I0125 08:09:45.727124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" event={"ID":"e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce","Type":"ContainerStarted","Data":"b2dbe3949459d48bd0707976f450a336d33ece6e45dd258d0528297595f39585"} Jan 25 08:09:45 crc kubenswrapper[4832]: I0125 08:09:45.752659 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-2kvpm" podStartSLOduration=1.48964179 podStartE2EDuration="5.752626162s" podCreationTimestamp="2026-01-25 08:09:40 +0000 UTC" firstStartedPulling="2026-01-25 08:09:40.914620601 +0000 UTC m=+763.588444134" lastFinishedPulling="2026-01-25 08:09:45.177604973 +0000 UTC m=+767.851428506" observedRunningTime="2026-01-25 08:09:45.748841273 +0000 UTC m=+768.422664816" watchObservedRunningTime="2026-01-25 08:09:45.752626162 +0000 UTC m=+768.426449725" Jan 25 08:09:50 crc kubenswrapper[4832]: I0125 08:09:50.602436 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-rjtfb" Jan 25 08:09:50 crc kubenswrapper[4832]: I0125 08:09:50.818781 
4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:50 crc kubenswrapper[4832]: I0125 08:09:50.819252 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:50 crc kubenswrapper[4832]: I0125 08:09:50.823582 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:51 crc kubenswrapper[4832]: I0125 08:09:51.775982 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-78f98cfc5c-hbcdc" Jan 25 08:09:51 crc kubenswrapper[4832]: I0125 08:09:51.869234 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8pg27"] Jan 25 08:10:00 crc kubenswrapper[4832]: I0125 08:10:00.536188 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c4g4v" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.773748 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m"] Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.775436 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.779315 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.800131 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m"] Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.816033 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.816310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.816398 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5x4r\" (UniqueName: \"kubernetes.io/projected/c23342e3-9a86-4405-823c-ba9e4f90a4da-kube-api-access-s5x4r\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: 
I0125 08:10:13.917190 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.917277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.917296 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5x4r\" (UniqueName: \"kubernetes.io/projected/c23342e3-9a86-4405-823c-ba9e4f90a4da-kube-api-access-s5x4r\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.917814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.917834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:13 crc kubenswrapper[4832]: I0125 08:10:13.937339 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5x4r\" (UniqueName: \"kubernetes.io/projected/c23342e3-9a86-4405-823c-ba9e4f90a4da-kube-api-access-s5x4r\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:14 crc kubenswrapper[4832]: I0125 08:10:14.149175 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:14 crc kubenswrapper[4832]: I0125 08:10:14.599474 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m"] Jan 25 08:10:14 crc kubenswrapper[4832]: I0125 08:10:14.956753 4832 generic.go:334] "Generic (PLEG): container finished" podID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerID="7b6531d45e72e0468b21a14839d9f2bd9c6f83c9691acc664f01dc9aad171575" exitCode=0 Jan 25 08:10:14 crc kubenswrapper[4832]: I0125 08:10:14.956821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" event={"ID":"c23342e3-9a86-4405-823c-ba9e4f90a4da","Type":"ContainerDied","Data":"7b6531d45e72e0468b21a14839d9f2bd9c6f83c9691acc664f01dc9aad171575"} Jan 25 08:10:14 crc kubenswrapper[4832]: I0125 08:10:14.956864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" event={"ID":"c23342e3-9a86-4405-823c-ba9e4f90a4da","Type":"ContainerStarted","Data":"d6c5a9201b2fe44b935d694840925cccb9f4111b67d126be5e404aeb918c84b4"} Jan 25 08:10:16 crc kubenswrapper[4832]: I0125 08:10:16.918520 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8pg27" podUID="95dbbcf8-838b-4f56-928a-81b4f038b259" containerName="console" containerID="cri-o://33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91" gracePeriod=15 Jan 25 08:10:16 crc kubenswrapper[4832]: I0125 08:10:16.969120 4832 generic.go:334] "Generic (PLEG): container finished" podID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerID="ad6f1e4f1325dde34b549d1eb99a14fb43affc9887db76cf09963ff026df68ea" exitCode=0 Jan 25 08:10:16 crc kubenswrapper[4832]: I0125 08:10:16.969163 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" event={"ID":"c23342e3-9a86-4405-823c-ba9e4f90a4da","Type":"ContainerDied","Data":"ad6f1e4f1325dde34b549d1eb99a14fb43affc9887db76cf09963ff026df68ea"} Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.352788 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8pg27_95dbbcf8-838b-4f56-928a-81b4f038b259/console/0.log" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.353096 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.366008 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9mbt\" (UniqueName: \"kubernetes.io/projected/95dbbcf8-838b-4f56-928a-81b4f038b259-kube-api-access-c9mbt\") pod \"95dbbcf8-838b-4f56-928a-81b4f038b259\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.366336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-serving-cert\") pod \"95dbbcf8-838b-4f56-928a-81b4f038b259\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.366369 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-oauth-config\") pod \"95dbbcf8-838b-4f56-928a-81b4f038b259\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.374137 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "95dbbcf8-838b-4f56-928a-81b4f038b259" (UID: "95dbbcf8-838b-4f56-928a-81b4f038b259"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.374683 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "95dbbcf8-838b-4f56-928a-81b4f038b259" (UID: "95dbbcf8-838b-4f56-928a-81b4f038b259"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.374686 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dbbcf8-838b-4f56-928a-81b4f038b259-kube-api-access-c9mbt" (OuterVolumeSpecName: "kube-api-access-c9mbt") pod "95dbbcf8-838b-4f56-928a-81b4f038b259" (UID: "95dbbcf8-838b-4f56-928a-81b4f038b259"). InnerVolumeSpecName "kube-api-access-c9mbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.469696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-oauth-serving-cert\") pod \"95dbbcf8-838b-4f56-928a-81b4f038b259\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.469749 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-service-ca\") pod \"95dbbcf8-838b-4f56-928a-81b4f038b259\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.469781 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-console-config\") pod \"95dbbcf8-838b-4f56-928a-81b4f038b259\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.469818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-trusted-ca-bundle\") pod \"95dbbcf8-838b-4f56-928a-81b4f038b259\" (UID: \"95dbbcf8-838b-4f56-928a-81b4f038b259\") " Jan 25 08:10:17 
crc kubenswrapper[4832]: I0125 08:10:17.470090 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.470107 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95dbbcf8-838b-4f56-928a-81b4f038b259-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.470118 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9mbt\" (UniqueName: \"kubernetes.io/projected/95dbbcf8-838b-4f56-928a-81b4f038b259-kube-api-access-c9mbt\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.470504 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "95dbbcf8-838b-4f56-928a-81b4f038b259" (UID: "95dbbcf8-838b-4f56-928a-81b4f038b259"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.470580 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "95dbbcf8-838b-4f56-928a-81b4f038b259" (UID: "95dbbcf8-838b-4f56-928a-81b4f038b259"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.470553 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-service-ca" (OuterVolumeSpecName: "service-ca") pod "95dbbcf8-838b-4f56-928a-81b4f038b259" (UID: "95dbbcf8-838b-4f56-928a-81b4f038b259"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.471079 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-console-config" (OuterVolumeSpecName: "console-config") pod "95dbbcf8-838b-4f56-928a-81b4f038b259" (UID: "95dbbcf8-838b-4f56-928a-81b4f038b259"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.571282 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.571332 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.571351 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-console-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.571365 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95dbbcf8-838b-4f56-928a-81b4f038b259-trusted-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.978444 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8pg27_95dbbcf8-838b-4f56-928a-81b4f038b259/console/0.log" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.978510 4832 generic.go:334] "Generic (PLEG): container finished" podID="95dbbcf8-838b-4f56-928a-81b4f038b259" containerID="33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91" exitCode=2 Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.978612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8pg27" event={"ID":"95dbbcf8-838b-4f56-928a-81b4f038b259","Type":"ContainerDied","Data":"33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91"} Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.978648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8pg27" event={"ID":"95dbbcf8-838b-4f56-928a-81b4f038b259","Type":"ContainerDied","Data":"90480f7dba8ae9fdd219e48f2f1853f5e269418bf954755c0e82491f1fd113da"} Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.978669 4832 scope.go:117] "RemoveContainer" containerID="33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.979649 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8pg27" Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.981575 4832 generic.go:334] "Generic (PLEG): container finished" podID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerID="7491d64dc293ae5402381299aba2ebded46bd89b1196b77a1c43d39b64fdbbee" exitCode=0 Jan 25 08:10:17 crc kubenswrapper[4832]: I0125 08:10:17.981621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" event={"ID":"c23342e3-9a86-4405-823c-ba9e4f90a4da","Type":"ContainerDied","Data":"7491d64dc293ae5402381299aba2ebded46bd89b1196b77a1c43d39b64fdbbee"} Jan 25 08:10:18 crc kubenswrapper[4832]: I0125 08:10:18.000639 4832 scope.go:117] "RemoveContainer" containerID="33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91" Jan 25 08:10:18 crc kubenswrapper[4832]: E0125 08:10:18.001157 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91\": container with ID starting with 33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91 not found: ID does not exist" containerID="33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91" Jan 25 08:10:18 crc kubenswrapper[4832]: I0125 08:10:18.001197 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91"} err="failed to get container status \"33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91\": rpc error: code = NotFound desc = could not find container \"33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91\": container with ID starting with 33d0fc31b0bc1409c2a27e276061ecab896dcb3c68dd7eae28791bbd6fcd9d91 not found: ID does not exist" Jan 25 08:10:18 crc kubenswrapper[4832]: I0125 08:10:18.020832 4832 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8pg27"] Jan 25 08:10:18 crc kubenswrapper[4832]: I0125 08:10:18.030202 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8pg27"] Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.201370 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.294027 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5x4r\" (UniqueName: \"kubernetes.io/projected/c23342e3-9a86-4405-823c-ba9e4f90a4da-kube-api-access-s5x4r\") pod \"c23342e3-9a86-4405-823c-ba9e4f90a4da\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.294085 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-util\") pod \"c23342e3-9a86-4405-823c-ba9e4f90a4da\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.294116 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-bundle\") pod \"c23342e3-9a86-4405-823c-ba9e4f90a4da\" (UID: \"c23342e3-9a86-4405-823c-ba9e4f90a4da\") " Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.295327 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-bundle" (OuterVolumeSpecName: "bundle") pod "c23342e3-9a86-4405-823c-ba9e4f90a4da" (UID: "c23342e3-9a86-4405-823c-ba9e4f90a4da"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.300817 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23342e3-9a86-4405-823c-ba9e4f90a4da-kube-api-access-s5x4r" (OuterVolumeSpecName: "kube-api-access-s5x4r") pod "c23342e3-9a86-4405-823c-ba9e4f90a4da" (UID: "c23342e3-9a86-4405-823c-ba9e4f90a4da"). InnerVolumeSpecName "kube-api-access-s5x4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.310429 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-util" (OuterVolumeSpecName: "util") pod "c23342e3-9a86-4405-823c-ba9e4f90a4da" (UID: "c23342e3-9a86-4405-823c-ba9e4f90a4da"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.395638 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-util\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.395678 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c23342e3-9a86-4405-823c-ba9e4f90a4da-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.395693 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5x4r\" (UniqueName: \"kubernetes.io/projected/c23342e3-9a86-4405-823c-ba9e4f90a4da-kube-api-access-s5x4r\") on node \"crc\" DevicePath \"\"" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.679504 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95dbbcf8-838b-4f56-928a-81b4f038b259" path="/var/lib/kubelet/pods/95dbbcf8-838b-4f56-928a-81b4f038b259/volumes" Jan 25 08:10:19 crc 
kubenswrapper[4832]: I0125 08:10:19.996376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" event={"ID":"c23342e3-9a86-4405-823c-ba9e4f90a4da","Type":"ContainerDied","Data":"d6c5a9201b2fe44b935d694840925cccb9f4111b67d126be5e404aeb918c84b4"} Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.996443 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c5a9201b2fe44b935d694840925cccb9f4111b67d126be5e404aeb918c84b4" Jan 25 08:10:19 crc kubenswrapper[4832]: I0125 08:10:19.996760 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.616739 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd"] Jan 25 08:10:28 crc kubenswrapper[4832]: E0125 08:10:28.617420 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerName="pull" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.617432 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerName="pull" Jan 25 08:10:28 crc kubenswrapper[4832]: E0125 08:10:28.617444 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95dbbcf8-838b-4f56-928a-81b4f038b259" containerName="console" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.617450 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dbbcf8-838b-4f56-928a-81b4f038b259" containerName="console" Jan 25 08:10:28 crc kubenswrapper[4832]: E0125 08:10:28.617461 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerName="util" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.617467 
4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerName="util" Jan 25 08:10:28 crc kubenswrapper[4832]: E0125 08:10:28.617475 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerName="extract" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.617481 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerName="extract" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.617580 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95dbbcf8-838b-4f56-928a-81b4f038b259" containerName="console" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.617593 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23342e3-9a86-4405-823c-ba9e4f90a4da" containerName="extract" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.617939 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.620628 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.622141 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.622541 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.622850 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.623095 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-67h6f" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.741744 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd"] Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.745497 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71c97cd3-3f75-4fbd-84d8-f08942aba882-apiservice-cert\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.745592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71c97cd3-3f75-4fbd-84d8-f08942aba882-webhook-cert\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: 
\"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.745624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtl6n\" (UniqueName: \"kubernetes.io/projected/71c97cd3-3f75-4fbd-84d8-f08942aba882-kube-api-access-qtl6n\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.847023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71c97cd3-3f75-4fbd-84d8-f08942aba882-webhook-cert\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.847098 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtl6n\" (UniqueName: \"kubernetes.io/projected/71c97cd3-3f75-4fbd-84d8-f08942aba882-kube-api-access-qtl6n\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.847161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71c97cd3-3f75-4fbd-84d8-f08942aba882-apiservice-cert\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.854292 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71c97cd3-3f75-4fbd-84d8-f08942aba882-apiservice-cert\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.855157 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71c97cd3-3f75-4fbd-84d8-f08942aba882-webhook-cert\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.870153 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtl6n\" (UniqueName: \"kubernetes.io/projected/71c97cd3-3f75-4fbd-84d8-f08942aba882-kube-api-access-qtl6n\") pod \"metallb-operator-controller-manager-5864b67f75-pvtmd\" (UID: \"71c97cd3-3f75-4fbd-84d8-f08942aba882\") " pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.876670 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4"] Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.877642 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.883791 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nz8m6" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.883994 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.884171 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.894899 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4"] Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.947718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.950223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6219f5c-261f-419a-b3de-ec9119991024-apiservice-cert\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.950276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6219f5c-261f-419a-b3de-ec9119991024-webhook-cert\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:28 crc kubenswrapper[4832]: I0125 08:10:28.950296 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xq6\" (UniqueName: \"kubernetes.io/projected/d6219f5c-261f-419a-b3de-ec9119991024-kube-api-access-g8xq6\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.051141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6219f5c-261f-419a-b3de-ec9119991024-apiservice-cert\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.051197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6219f5c-261f-419a-b3de-ec9119991024-webhook-cert\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.051225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8xq6\" (UniqueName: \"kubernetes.io/projected/d6219f5c-261f-419a-b3de-ec9119991024-kube-api-access-g8xq6\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.056211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6219f5c-261f-419a-b3de-ec9119991024-webhook-cert\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" 
(UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.056298 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6219f5c-261f-419a-b3de-ec9119991024-apiservice-cert\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.074338 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8xq6\" (UniqueName: \"kubernetes.io/projected/d6219f5c-261f-419a-b3de-ec9119991024-kube-api-access-g8xq6\") pod \"metallb-operator-webhook-server-ffcf449bb-jz2q4\" (UID: \"d6219f5c-261f-419a-b3de-ec9119991024\") " pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.200600 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd"] Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.203553 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:29 crc kubenswrapper[4832]: W0125 08:10:29.203614 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c97cd3_3f75_4fbd_84d8_f08942aba882.slice/crio-2a1b45f5b1159ffd36357720d90ceebc24b67a04bbc0e63c53a8636e047b19b4 WatchSource:0}: Error finding container 2a1b45f5b1159ffd36357720d90ceebc24b67a04bbc0e63c53a8636e047b19b4: Status 404 returned error can't find the container with id 2a1b45f5b1159ffd36357720d90ceebc24b67a04bbc0e63c53a8636e047b19b4 Jan 25 08:10:29 crc kubenswrapper[4832]: I0125 08:10:29.455092 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4"] Jan 25 08:10:29 crc kubenswrapper[4832]: W0125 08:10:29.458649 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6219f5c_261f_419a_b3de_ec9119991024.slice/crio-8eccbe8cefa8f956538c9eb2fc64c809111891f81a1cae58be5510f40f7d4b96 WatchSource:0}: Error finding container 8eccbe8cefa8f956538c9eb2fc64c809111891f81a1cae58be5510f40f7d4b96: Status 404 returned error can't find the container with id 8eccbe8cefa8f956538c9eb2fc64c809111891f81a1cae58be5510f40f7d4b96 Jan 25 08:10:30 crc kubenswrapper[4832]: I0125 08:10:30.060580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" event={"ID":"71c97cd3-3f75-4fbd-84d8-f08942aba882","Type":"ContainerStarted","Data":"2a1b45f5b1159ffd36357720d90ceebc24b67a04bbc0e63c53a8636e047b19b4"} Jan 25 08:10:30 crc kubenswrapper[4832]: I0125 08:10:30.062020 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" 
event={"ID":"d6219f5c-261f-419a-b3de-ec9119991024","Type":"ContainerStarted","Data":"8eccbe8cefa8f956538c9eb2fc64c809111891f81a1cae58be5510f40f7d4b96"} Jan 25 08:10:32 crc kubenswrapper[4832]: I0125 08:10:32.073555 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" event={"ID":"71c97cd3-3f75-4fbd-84d8-f08942aba882","Type":"ContainerStarted","Data":"d889795ff84cc73974d7bdd4800bb0ff90dd723434ceea616d6a37271005d62b"} Jan 25 08:10:32 crc kubenswrapper[4832]: I0125 08:10:32.074529 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:10:32 crc kubenswrapper[4832]: I0125 08:10:32.094444 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" podStartSLOduration=1.445316195 podStartE2EDuration="4.094421541s" podCreationTimestamp="2026-01-25 08:10:28 +0000 UTC" firstStartedPulling="2026-01-25 08:10:29.216632133 +0000 UTC m=+811.890455666" lastFinishedPulling="2026-01-25 08:10:31.865737479 +0000 UTC m=+814.539561012" observedRunningTime="2026-01-25 08:10:32.091172899 +0000 UTC m=+814.764996432" watchObservedRunningTime="2026-01-25 08:10:32.094421541 +0000 UTC m=+814.768245074" Jan 25 08:10:34 crc kubenswrapper[4832]: I0125 08:10:34.085648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" event={"ID":"d6219f5c-261f-419a-b3de-ec9119991024","Type":"ContainerStarted","Data":"68ecd570e71fa8a266b9677fb17927eaaa9ee0d40a94c8328653814f1d948de1"} Jan 25 08:10:34 crc kubenswrapper[4832]: I0125 08:10:34.085944 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:10:34 crc kubenswrapper[4832]: I0125 08:10:34.105716 4832 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" podStartSLOduration=2.063962766 podStartE2EDuration="6.10569034s" podCreationTimestamp="2026-01-25 08:10:28 +0000 UTC" firstStartedPulling="2026-01-25 08:10:29.461555365 +0000 UTC m=+812.135378898" lastFinishedPulling="2026-01-25 08:10:33.503282939 +0000 UTC m=+816.177106472" observedRunningTime="2026-01-25 08:10:34.099733812 +0000 UTC m=+816.773557345" watchObservedRunningTime="2026-01-25 08:10:34.10569034 +0000 UTC m=+816.779513873" Jan 25 08:10:49 crc kubenswrapper[4832]: I0125 08:10:49.211588 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-ffcf449bb-jz2q4" Jan 25 08:11:08 crc kubenswrapper[4832]: I0125 08:11:08.952671 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5864b67f75-pvtmd" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.766752 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7"] Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.767891 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.770256 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-rf6nh" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.771112 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.782874 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6zmfq"] Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.786711 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.788268 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.794275 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7"] Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.796712 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.854535 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lbb8k"] Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.855711 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lbb8k" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.858702 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.859209 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.859222 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qs97k" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.860257 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.892150 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-z2hg2"] Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.893129 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.896307 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.908354 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-z2hg2"] Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909253 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-frr-conf\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909298 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/940e2830-7ef2-4237-a053-6981a3bbf2b3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-np4h7\" (UID: \"940e2830-7ef2-4237-a053-6981a3bbf2b3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-reloader\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-frr-sockets\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 
08:11:09.909403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c203bd63-9985-423a-bc14-8542960372f1-frr-startup\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909440 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc9fr\" (UniqueName: \"kubernetes.io/projected/940e2830-7ef2-4237-a053-6981a3bbf2b3-kube-api-access-hc9fr\") pod \"frr-k8s-webhook-server-7df86c4f6c-np4h7\" (UID: \"940e2830-7ef2-4237-a053-6981a3bbf2b3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909459 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-metrics\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tjf4\" (UniqueName: \"kubernetes.io/projected/c203bd63-9985-423a-bc14-8542960372f1-kube-api-access-7tjf4\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:09 crc kubenswrapper[4832]: I0125 08:11:09.909506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c203bd63-9985-423a-bc14-8542960372f1-metrics-certs\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.010554 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tjf4\" (UniqueName: \"kubernetes.io/projected/c203bd63-9985-423a-bc14-8542960372f1-kube-api-access-7tjf4\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.011467 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c203bd63-9985-423a-bc14-8542960372f1-metrics-certs\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.011574 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-frr-conf\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.011671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/940e2830-7ef2-4237-a053-6981a3bbf2b3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-np4h7\" (UID: \"940e2830-7ef2-4237-a053-6981a3bbf2b3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.011774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metrics-certs\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.011864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-reloader\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.011963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-frr-sockets\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012064 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hd4h\" (UniqueName: \"kubernetes.io/projected/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-kube-api-access-4hd4h\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metallb-excludel2\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012249 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-cert\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012337 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzgv\" (UniqueName: 
\"kubernetes.io/projected/4095df57-d3c6-4d95-8f54-1d5eafc2a919-kube-api-access-pwzgv\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c203bd63-9985-423a-bc14-8542960372f1-frr-startup\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012535 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-metrics-certs\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc9fr\" (UniqueName: \"kubernetes.io/projected/940e2830-7ef2-4237-a053-6981a3bbf2b3-kube-api-access-hc9fr\") pod \"frr-k8s-webhook-server-7df86c4f6c-np4h7\" (UID: \"940e2830-7ef2-4237-a053-6981a3bbf2b3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.012854 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-metrics\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.013242 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-metrics\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.014791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-frr-conf\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.014875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-frr-sockets\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.015016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c203bd63-9985-423a-bc14-8542960372f1-reloader\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.015649 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c203bd63-9985-423a-bc14-8542960372f1-frr-startup\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.023008 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/940e2830-7ef2-4237-a053-6981a3bbf2b3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-np4h7\" (UID: \"940e2830-7ef2-4237-a053-6981a3bbf2b3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.027415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c203bd63-9985-423a-bc14-8542960372f1-metrics-certs\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.049791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc9fr\" (UniqueName: \"kubernetes.io/projected/940e2830-7ef2-4237-a053-6981a3bbf2b3-kube-api-access-hc9fr\") pod \"frr-k8s-webhook-server-7df86c4f6c-np4h7\" (UID: \"940e2830-7ef2-4237-a053-6981a3bbf2b3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.049950 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tjf4\" (UniqueName: \"kubernetes.io/projected/c203bd63-9985-423a-bc14-8542960372f1-kube-api-access-7tjf4\") pod \"frr-k8s-6zmfq\" (UID: \"c203bd63-9985-423a-bc14-8542960372f1\") " pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.083907 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.103511 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.113827 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hd4h\" (UniqueName: \"kubernetes.io/projected/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-kube-api-access-4hd4h\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.113870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metallb-excludel2\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.113900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzgv\" (UniqueName: \"kubernetes.io/projected/4095df57-d3c6-4d95-8f54-1d5eafc2a919-kube-api-access-pwzgv\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.113920 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-cert\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.113948 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.113967 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-metrics-certs\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.114030 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metrics-certs\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.114145 4832 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.114199 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metrics-certs podName:4095df57-d3c6-4d95-8f54-1d5eafc2a919 nodeName:}" failed. No retries permitted until 2026-01-25 08:11:10.614182447 +0000 UTC m=+853.288005980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metrics-certs") pod "speaker-lbb8k" (UID: "4095df57-d3c6-4d95-8f54-1d5eafc2a919") : secret "speaker-certs-secret" not found Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.115255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metallb-excludel2\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.115480 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.115510 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist podName:4095df57-d3c6-4d95-8f54-1d5eafc2a919 nodeName:}" failed. No retries permitted until 2026-01-25 08:11:10.615502549 +0000 UTC m=+853.289326082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist") pod "speaker-lbb8k" (UID: "4095df57-d3c6-4d95-8f54-1d5eafc2a919") : secret "metallb-memberlist" not found Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.115543 4832 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.115560 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-metrics-certs podName:80c752a5-a0c6-4968-8f2f-4b5aa047c6c5 nodeName:}" failed. No retries permitted until 2026-01-25 08:11:10.615554951 +0000 UTC m=+853.289378484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-metrics-certs") pod "controller-6968d8fdc4-z2hg2" (UID: "80c752a5-a0c6-4968-8f2f-4b5aa047c6c5") : secret "controller-certs-secret" not found Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.117599 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.128258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-cert\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.132624 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzgv\" (UniqueName: \"kubernetes.io/projected/4095df57-d3c6-4d95-8f54-1d5eafc2a919-kube-api-access-pwzgv\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.137989 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hd4h\" (UniqueName: \"kubernetes.io/projected/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-kube-api-access-4hd4h\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.292558 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerStarted","Data":"b089c493e4ecfb1c61c4debb5002d5e02fc4ee2c9d61f55993b6e755086c3db5"} Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.508812 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7"] Jan 25 08:11:10 crc kubenswrapper[4832]: W0125 08:11:10.514135 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod940e2830_7ef2_4237_a053_6981a3bbf2b3.slice/crio-b3b669a388ab583d8db5073cd0c8a2139c1dd5aeb4e8d2e09811be5d3bb48d65 WatchSource:0}: Error finding container b3b669a388ab583d8db5073cd0c8a2139c1dd5aeb4e8d2e09811be5d3bb48d65: Status 404 returned error can't find the container with id b3b669a388ab583d8db5073cd0c8a2139c1dd5aeb4e8d2e09811be5d3bb48d65 Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.619857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.619910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-metrics-certs\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.619980 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 25 08:11:10 crc kubenswrapper[4832]: E0125 08:11:10.620033 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist podName:4095df57-d3c6-4d95-8f54-1d5eafc2a919 nodeName:}" failed. No retries permitted until 2026-01-25 08:11:11.620020535 +0000 UTC m=+854.293844068 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist") pod "speaker-lbb8k" (UID: "4095df57-d3c6-4d95-8f54-1d5eafc2a919") : secret "metallb-memberlist" not found Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.619987 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metrics-certs\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.624297 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-metrics-certs\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.624640 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80c752a5-a0c6-4968-8f2f-4b5aa047c6c5-metrics-certs\") pod \"controller-6968d8fdc4-z2hg2\" (UID: \"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5\") " pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:10 crc kubenswrapper[4832]: I0125 08:11:10.813307 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:11 crc kubenswrapper[4832]: I0125 08:11:11.298081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" event={"ID":"940e2830-7ef2-4237-a053-6981a3bbf2b3","Type":"ContainerStarted","Data":"b3b669a388ab583d8db5073cd0c8a2139c1dd5aeb4e8d2e09811be5d3bb48d65"} Jan 25 08:11:11 crc kubenswrapper[4832]: I0125 08:11:11.321690 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-z2hg2"] Jan 25 08:11:11 crc kubenswrapper[4832]: W0125 08:11:11.331287 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80c752a5_a0c6_4968_8f2f_4b5aa047c6c5.slice/crio-495fd745147389db08df859501181ed40f0987e9abca0d51f5471c950f367864 WatchSource:0}: Error finding container 495fd745147389db08df859501181ed40f0987e9abca0d51f5471c950f367864: Status 404 returned error can't find the container with id 495fd745147389db08df859501181ed40f0987e9abca0d51f5471c950f367864 Jan 25 08:11:11 crc kubenswrapper[4832]: I0125 08:11:11.638342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:11 crc kubenswrapper[4832]: I0125 08:11:11.645859 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4095df57-d3c6-4d95-8f54-1d5eafc2a919-memberlist\") pod \"speaker-lbb8k\" (UID: \"4095df57-d3c6-4d95-8f54-1d5eafc2a919\") " pod="metallb-system/speaker-lbb8k" Jan 25 08:11:11 crc kubenswrapper[4832]: I0125 08:11:11.669728 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lbb8k" Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.322504 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z2hg2" event={"ID":"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5","Type":"ContainerStarted","Data":"fd109b23fdde22f75f33534930fd23479f1212989303f2192ee98ca8d22c79d6"} Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.322790 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z2hg2" event={"ID":"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5","Type":"ContainerStarted","Data":"8a92bce67141df48020c45c98b14238d37a064b6b2a50588b1a73eb06b51eb79"} Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.322805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z2hg2" event={"ID":"80c752a5-a0c6-4968-8f2f-4b5aa047c6c5","Type":"ContainerStarted","Data":"495fd745147389db08df859501181ed40f0987e9abca0d51f5471c950f367864"} Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.323658 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.332097 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbb8k" event={"ID":"4095df57-d3c6-4d95-8f54-1d5eafc2a919","Type":"ContainerStarted","Data":"87ffc963bd20000f98799224892c06894964623583d383f414e0a7c67511bfab"} Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.332150 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbb8k" event={"ID":"4095df57-d3c6-4d95-8f54-1d5eafc2a919","Type":"ContainerStarted","Data":"3cb7b7a4dbe40f8c9b2704f77b486711bc4331d1e733e2078e3546486b1a8744"} Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.332161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbb8k" 
event={"ID":"4095df57-d3c6-4d95-8f54-1d5eafc2a919","Type":"ContainerStarted","Data":"37a98c7cbbb19619a93ca3e4ce597c0e48fb4798f00057130beffa0b86a1be6e"} Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.332344 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lbb8k" Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.354195 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-z2hg2" podStartSLOduration=3.354178543 podStartE2EDuration="3.354178543s" podCreationTimestamp="2026-01-25 08:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:11:12.351893482 +0000 UTC m=+855.025717035" watchObservedRunningTime="2026-01-25 08:11:12.354178543 +0000 UTC m=+855.028002076" Jan 25 08:11:12 crc kubenswrapper[4832]: I0125 08:11:12.371440 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lbb8k" podStartSLOduration=3.371420396 podStartE2EDuration="3.371420396s" podCreationTimestamp="2026-01-25 08:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:11:12.369630849 +0000 UTC m=+855.043454392" watchObservedRunningTime="2026-01-25 08:11:12.371420396 +0000 UTC m=+855.045243949" Jan 25 08:11:19 crc kubenswrapper[4832]: I0125 08:11:19.395880 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" event={"ID":"940e2830-7ef2-4237-a053-6981a3bbf2b3","Type":"ContainerStarted","Data":"cfda3beceda28bae081bf0999bf220158c60c3e4087996ea832cbdf7f404c0d1"} Jan 25 08:11:19 crc kubenswrapper[4832]: I0125 08:11:19.396468 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:19 crc 
kubenswrapper[4832]: I0125 08:11:19.397888 4832 generic.go:334] "Generic (PLEG): container finished" podID="c203bd63-9985-423a-bc14-8542960372f1" containerID="bfef06f9e0215ce26546fec514fb3790d2f8f5aedfef5ecfa3b4c356e6103579" exitCode=0 Jan 25 08:11:19 crc kubenswrapper[4832]: I0125 08:11:19.397937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerDied","Data":"bfef06f9e0215ce26546fec514fb3790d2f8f5aedfef5ecfa3b4c356e6103579"} Jan 25 08:11:19 crc kubenswrapper[4832]: I0125 08:11:19.414286 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" podStartSLOduration=2.028555163 podStartE2EDuration="10.414269317s" podCreationTimestamp="2026-01-25 08:11:09 +0000 UTC" firstStartedPulling="2026-01-25 08:11:10.517851972 +0000 UTC m=+853.191675505" lastFinishedPulling="2026-01-25 08:11:18.903566136 +0000 UTC m=+861.577389659" observedRunningTime="2026-01-25 08:11:19.409736865 +0000 UTC m=+862.083560408" watchObservedRunningTime="2026-01-25 08:11:19.414269317 +0000 UTC m=+862.088092850" Jan 25 08:11:20 crc kubenswrapper[4832]: I0125 08:11:20.405367 4832 generic.go:334] "Generic (PLEG): container finished" podID="c203bd63-9985-423a-bc14-8542960372f1" containerID="2435bc817e4e2dee364d428b860f32f7f11603cdaf8fc2bd0e2148c6bc1bbc78" exitCode=0 Jan 25 08:11:20 crc kubenswrapper[4832]: I0125 08:11:20.405456 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerDied","Data":"2435bc817e4e2dee364d428b860f32f7f11603cdaf8fc2bd0e2148c6bc1bbc78"} Jan 25 08:11:21 crc kubenswrapper[4832]: I0125 08:11:21.412847 4832 generic.go:334] "Generic (PLEG): container finished" podID="c203bd63-9985-423a-bc14-8542960372f1" containerID="b532e2d0166de147be5151417cd457a080fc516e36acf66e2e710c9f53385c19" exitCode=0 Jan 25 
08:11:21 crc kubenswrapper[4832]: I0125 08:11:21.412951 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerDied","Data":"b532e2d0166de147be5151417cd457a080fc516e36acf66e2e710c9f53385c19"} Jan 25 08:11:21 crc kubenswrapper[4832]: I0125 08:11:21.678908 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lbb8k" Jan 25 08:11:22 crc kubenswrapper[4832]: I0125 08:11:22.150067 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:11:22 crc kubenswrapper[4832]: I0125 08:11:22.150585 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:11:22 crc kubenswrapper[4832]: I0125 08:11:22.432704 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerStarted","Data":"df7e84945f3c0edcc1e9aeb7985f3e224f291f5bd0e7a36ef1aaf861aac54555"} Jan 25 08:11:22 crc kubenswrapper[4832]: I0125 08:11:22.432743 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerStarted","Data":"f248d7d41b4bf034e4603268168fbe83027bce3d9b3e19d41c33f59cc40a0862"} Jan 25 08:11:22 crc kubenswrapper[4832]: I0125 08:11:22.432753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" 
event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerStarted","Data":"ac769e80b14ff9703d459ab5b15516e1d49a9dcbe2c6326fb073c3bcaa08ec76"} Jan 25 08:11:22 crc kubenswrapper[4832]: I0125 08:11:22.432763 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerStarted","Data":"70102ab32a3302dc7ade82344f082dc8dc93aa23d950011a117aeb5b212a0568"} Jan 25 08:11:22 crc kubenswrapper[4832]: I0125 08:11:22.432772 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerStarted","Data":"bd4f1dfe19188d0264142993ba32493269a9629eab48a84959696dbb70c893eb"} Jan 25 08:11:23 crc kubenswrapper[4832]: I0125 08:11:23.442621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zmfq" event={"ID":"c203bd63-9985-423a-bc14-8542960372f1","Type":"ContainerStarted","Data":"ed0ad0b98c6839b257ca0bbc7fe8bcb2627a1386de8412dcd3af5b402b9c5d88"} Jan 25 08:11:23 crc kubenswrapper[4832]: I0125 08:11:23.442764 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:23 crc kubenswrapper[4832]: I0125 08:11:23.464820 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6zmfq" podStartSLOduration=5.838052698 podStartE2EDuration="14.464802163s" podCreationTimestamp="2026-01-25 08:11:09 +0000 UTC" firstStartedPulling="2026-01-25 08:11:10.259331742 +0000 UTC m=+852.933155275" lastFinishedPulling="2026-01-25 08:11:18.886081207 +0000 UTC m=+861.559904740" observedRunningTime="2026-01-25 08:11:23.462129069 +0000 UTC m=+866.135952612" watchObservedRunningTime="2026-01-25 08:11:23.464802163 +0000 UTC m=+866.138625706" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.640022 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-6grwr"] Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.641037 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6grwr" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.650006 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.650184 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.662048 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6grwr"] Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.666505 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gcwk2" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.730439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk899\" (UniqueName: \"kubernetes.io/projected/6cb6c547-f5ea-4507-a7f0-867b4a2a2363-kube-api-access-nk899\") pod \"openstack-operator-index-6grwr\" (UID: \"6cb6c547-f5ea-4507-a7f0-867b4a2a2363\") " pod="openstack-operators/openstack-operator-index-6grwr" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.831903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk899\" (UniqueName: \"kubernetes.io/projected/6cb6c547-f5ea-4507-a7f0-867b4a2a2363-kube-api-access-nk899\") pod \"openstack-operator-index-6grwr\" (UID: \"6cb6c547-f5ea-4507-a7f0-867b4a2a2363\") " pod="openstack-operators/openstack-operator-index-6grwr" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.858568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk899\" 
(UniqueName: \"kubernetes.io/projected/6cb6c547-f5ea-4507-a7f0-867b4a2a2363-kube-api-access-nk899\") pod \"openstack-operator-index-6grwr\" (UID: \"6cb6c547-f5ea-4507-a7f0-867b4a2a2363\") " pod="openstack-operators/openstack-operator-index-6grwr" Jan 25 08:11:24 crc kubenswrapper[4832]: I0125 08:11:24.967574 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6grwr" Jan 25 08:11:25 crc kubenswrapper[4832]: I0125 08:11:25.104823 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:25 crc kubenswrapper[4832]: I0125 08:11:25.148490 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:25 crc kubenswrapper[4832]: I0125 08:11:25.428968 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6grwr"] Jan 25 08:11:25 crc kubenswrapper[4832]: I0125 08:11:25.470303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6grwr" event={"ID":"6cb6c547-f5ea-4507-a7f0-867b4a2a2363","Type":"ContainerStarted","Data":"7e79aed1a80eeb9e57b89b21c4621ac8ea34977d95237ff1edbbc7c1069ab769"} Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.021552 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6grwr"] Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.487270 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6grwr" event={"ID":"6cb6c547-f5ea-4507-a7f0-867b4a2a2363","Type":"ContainerStarted","Data":"98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539"} Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.487440 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6grwr" 
podUID="6cb6c547-f5ea-4507-a7f0-867b4a2a2363" containerName="registry-server" containerID="cri-o://98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539" gracePeriod=2 Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.506326 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6grwr" podStartSLOduration=1.88318601 podStartE2EDuration="4.506302694s" podCreationTimestamp="2026-01-25 08:11:24 +0000 UTC" firstStartedPulling="2026-01-25 08:11:25.426804517 +0000 UTC m=+868.100628050" lastFinishedPulling="2026-01-25 08:11:28.049921201 +0000 UTC m=+870.723744734" observedRunningTime="2026-01-25 08:11:28.503119995 +0000 UTC m=+871.176943528" watchObservedRunningTime="2026-01-25 08:11:28.506302694 +0000 UTC m=+871.180126267" Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.628956 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k945x"] Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.630472 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.636541 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k945x"] Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.784774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzwkn\" (UniqueName: \"kubernetes.io/projected/40c93737-1880-48e7-a342-d3a8c8a5ad68-kube-api-access-vzwkn\") pod \"openstack-operator-index-k945x\" (UID: \"40c93737-1880-48e7-a342-d3a8c8a5ad68\") " pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.885979 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzwkn\" (UniqueName: \"kubernetes.io/projected/40c93737-1880-48e7-a342-d3a8c8a5ad68-kube-api-access-vzwkn\") pod \"openstack-operator-index-k945x\" (UID: \"40c93737-1880-48e7-a342-d3a8c8a5ad68\") " pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.894283 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6grwr" Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.907355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzwkn\" (UniqueName: \"kubernetes.io/projected/40c93737-1880-48e7-a342-d3a8c8a5ad68-kube-api-access-vzwkn\") pod \"openstack-operator-index-k945x\" (UID: \"40c93737-1880-48e7-a342-d3a8c8a5ad68\") " pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.961641 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.986688 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk899\" (UniqueName: \"kubernetes.io/projected/6cb6c547-f5ea-4507-a7f0-867b4a2a2363-kube-api-access-nk899\") pod \"6cb6c547-f5ea-4507-a7f0-867b4a2a2363\" (UID: \"6cb6c547-f5ea-4507-a7f0-867b4a2a2363\") " Jan 25 08:11:28 crc kubenswrapper[4832]: I0125 08:11:28.990058 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb6c547-f5ea-4507-a7f0-867b4a2a2363-kube-api-access-nk899" (OuterVolumeSpecName: "kube-api-access-nk899") pod "6cb6c547-f5ea-4507-a7f0-867b4a2a2363" (UID: "6cb6c547-f5ea-4507-a7f0-867b4a2a2363"). InnerVolumeSpecName "kube-api-access-nk899". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.088461 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk899\" (UniqueName: \"kubernetes.io/projected/6cb6c547-f5ea-4507-a7f0-867b4a2a2363-kube-api-access-nk899\") on node \"crc\" DevicePath \"\"" Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.365244 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k945x"] Jan 25 08:11:29 crc kubenswrapper[4832]: W0125 08:11:29.378540 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40c93737_1880_48e7_a342_d3a8c8a5ad68.slice/crio-76223078824b81e965845755caa5813c386f2b7384bb3d68540f9631af09359d WatchSource:0}: Error finding container 76223078824b81e965845755caa5813c386f2b7384bb3d68540f9631af09359d: Status 404 returned error can't find the container with id 76223078824b81e965845755caa5813c386f2b7384bb3d68540f9631af09359d Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.496454 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k945x" event={"ID":"40c93737-1880-48e7-a342-d3a8c8a5ad68","Type":"ContainerStarted","Data":"76223078824b81e965845755caa5813c386f2b7384bb3d68540f9631af09359d"} Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.498858 4832 generic.go:334] "Generic (PLEG): container finished" podID="6cb6c547-f5ea-4507-a7f0-867b4a2a2363" containerID="98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539" exitCode=0 Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.498906 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6grwr" event={"ID":"6cb6c547-f5ea-4507-a7f0-867b4a2a2363","Type":"ContainerDied","Data":"98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539"} Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.498945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6grwr" event={"ID":"6cb6c547-f5ea-4507-a7f0-867b4a2a2363","Type":"ContainerDied","Data":"7e79aed1a80eeb9e57b89b21c4621ac8ea34977d95237ff1edbbc7c1069ab769"} Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.498966 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6grwr" Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.498978 4832 scope.go:117] "RemoveContainer" containerID="98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539" Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.542436 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6grwr"] Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.542613 4832 scope.go:117] "RemoveContainer" containerID="98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539" Jan 25 08:11:29 crc kubenswrapper[4832]: E0125 08:11:29.543364 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539\": container with ID starting with 98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539 not found: ID does not exist" containerID="98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539" Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.543419 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539"} err="failed to get container status \"98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539\": rpc error: code = NotFound desc = could not find container \"98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539\": container with ID starting with 98295ad785c2985adcd995772f21a861e0d06d820f05c86e419632207fdac539 not found: ID does not exist" Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.546409 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6grwr"] Jan 25 08:11:29 crc kubenswrapper[4832]: I0125 08:11:29.679122 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6cb6c547-f5ea-4507-a7f0-867b4a2a2363" path="/var/lib/kubelet/pods/6cb6c547-f5ea-4507-a7f0-867b4a2a2363/volumes" Jan 25 08:11:30 crc kubenswrapper[4832]: I0125 08:11:30.092158 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-np4h7" Jan 25 08:11:30 crc kubenswrapper[4832]: I0125 08:11:30.511405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k945x" event={"ID":"40c93737-1880-48e7-a342-d3a8c8a5ad68","Type":"ContainerStarted","Data":"293c5a5b2e4805b7b6d6b99da8bd99ae3d41df3c37282119701a9a5f62a919a1"} Jan 25 08:11:30 crc kubenswrapper[4832]: I0125 08:11:30.536805 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k945x" podStartSLOduration=2.482734831 podStartE2EDuration="2.536769271s" podCreationTimestamp="2026-01-25 08:11:28 +0000 UTC" firstStartedPulling="2026-01-25 08:11:29.383318656 +0000 UTC m=+872.057142189" lastFinishedPulling="2026-01-25 08:11:29.437353096 +0000 UTC m=+872.111176629" observedRunningTime="2026-01-25 08:11:30.530329308 +0000 UTC m=+873.204152841" watchObservedRunningTime="2026-01-25 08:11:30.536769271 +0000 UTC m=+873.210592834" Jan 25 08:11:30 crc kubenswrapper[4832]: I0125 08:11:30.816590 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-z2hg2" Jan 25 08:11:38 crc kubenswrapper[4832]: I0125 08:11:38.963024 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:38 crc kubenswrapper[4832]: I0125 08:11:38.963621 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:38 crc kubenswrapper[4832]: I0125 08:11:38.993208 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:39 crc kubenswrapper[4832]: I0125 08:11:39.591751 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-k945x" Jan 25 08:11:40 crc kubenswrapper[4832]: I0125 08:11:40.107893 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6zmfq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.616123 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq"] Jan 25 08:11:46 crc kubenswrapper[4832]: E0125 08:11:46.628892 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb6c547-f5ea-4507-a7f0-867b4a2a2363" containerName="registry-server" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.628919 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb6c547-f5ea-4507-a7f0-867b4a2a2363" containerName="registry-server" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.629054 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb6c547-f5ea-4507-a7f0-867b4a2a2363" containerName="registry-server" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.630053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq"] Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.630155 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.631921 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pk2bp" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.750241 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfg2p\" (UniqueName: \"kubernetes.io/projected/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-kube-api-access-sfg2p\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.750307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-bundle\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.750425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-util\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.851516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-util\") pod 
\"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.851603 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfg2p\" (UniqueName: \"kubernetes.io/projected/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-kube-api-access-sfg2p\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.851701 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-bundle\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.852135 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-bundle\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.852422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-util\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " 
pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.876078 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfg2p\" (UniqueName: \"kubernetes.io/projected/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-kube-api-access-sfg2p\") pod \"2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:46 crc kubenswrapper[4832]: I0125 08:11:46.954340 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:47 crc kubenswrapper[4832]: I0125 08:11:47.143028 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq"] Jan 25 08:11:47 crc kubenswrapper[4832]: I0125 08:11:47.622496 4832 generic.go:334] "Generic (PLEG): container finished" podID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerID="9bf8038e2a191e44ae6ee9657d91e92e86cba239c22c125f802784c0a73bd07b" exitCode=0 Jan 25 08:11:47 crc kubenswrapper[4832]: I0125 08:11:47.622548 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" event={"ID":"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2","Type":"ContainerDied","Data":"9bf8038e2a191e44ae6ee9657d91e92e86cba239c22c125f802784c0a73bd07b"} Jan 25 08:11:47 crc kubenswrapper[4832]: I0125 08:11:47.622801 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" event={"ID":"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2","Type":"ContainerStarted","Data":"b8ce46efe87292a9e842adfeda3a16d5be7960c5a8375f462301a6fb4eff549f"} Jan 25 08:11:48 crc 
kubenswrapper[4832]: I0125 08:11:48.630144 4832 generic.go:334] "Generic (PLEG): container finished" podID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerID="ad744a9163b00de7030edf1d3d777601e13a60070bfff61f5d2c60afd7f192cd" exitCode=0 Jan 25 08:11:48 crc kubenswrapper[4832]: I0125 08:11:48.630241 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" event={"ID":"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2","Type":"ContainerDied","Data":"ad744a9163b00de7030edf1d3d777601e13a60070bfff61f5d2c60afd7f192cd"} Jan 25 08:11:49 crc kubenswrapper[4832]: I0125 08:11:49.638112 4832 generic.go:334] "Generic (PLEG): container finished" podID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerID="32a717dcd1318e9e5038c6ae0e93e2d82a4da9431a8572dcd88eb83c19e928ef" exitCode=0 Jan 25 08:11:49 crc kubenswrapper[4832]: I0125 08:11:49.638188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" event={"ID":"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2","Type":"ContainerDied","Data":"32a717dcd1318e9e5038c6ae0e93e2d82a4da9431a8572dcd88eb83c19e928ef"} Jan 25 08:11:50 crc kubenswrapper[4832]: I0125 08:11:50.895490 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.040354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfg2p\" (UniqueName: \"kubernetes.io/projected/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-kube-api-access-sfg2p\") pod \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.040500 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-util\") pod \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.040572 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-bundle\") pod \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\" (UID: \"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2\") " Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.041239 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-bundle" (OuterVolumeSpecName: "bundle") pod "f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" (UID: "f27419fd-d9b8-4ae4-ae3c-a9ad071152b2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.044890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-kube-api-access-sfg2p" (OuterVolumeSpecName: "kube-api-access-sfg2p") pod "f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" (UID: "f27419fd-d9b8-4ae4-ae3c-a9ad071152b2"). InnerVolumeSpecName "kube-api-access-sfg2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.055146 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-util" (OuterVolumeSpecName: "util") pod "f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" (UID: "f27419fd-d9b8-4ae4-ae3c-a9ad071152b2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.142422 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-util\") on node \"crc\" DevicePath \"\"" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.142459 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.142469 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfg2p\" (UniqueName: \"kubernetes.io/projected/f27419fd-d9b8-4ae4-ae3c-a9ad071152b2-kube-api-access-sfg2p\") on node \"crc\" DevicePath \"\"" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.651817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" event={"ID":"f27419fd-d9b8-4ae4-ae3c-a9ad071152b2","Type":"ContainerDied","Data":"b8ce46efe87292a9e842adfeda3a16d5be7960c5a8375f462301a6fb4eff549f"} Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.651856 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ce46efe87292a9e842adfeda3a16d5be7960c5a8375f462301a6fb4eff549f" Jan 25 08:11:51 crc kubenswrapper[4832]: I0125 08:11:51.651911 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq" Jan 25 08:11:52 crc kubenswrapper[4832]: I0125 08:11:52.149543 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:11:52 crc kubenswrapper[4832]: I0125 08:11:52.149598 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.401437 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9d58658-glj79"] Jan 25 08:11:54 crc kubenswrapper[4832]: E0125 08:11:54.402041 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerName="util" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.402055 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerName="util" Jan 25 08:11:54 crc kubenswrapper[4832]: E0125 08:11:54.402070 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerName="extract" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.402076 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerName="extract" Jan 25 08:11:54 crc kubenswrapper[4832]: E0125 08:11:54.402089 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" 
containerName="pull" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.402095 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerName="pull" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.402208 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27419fd-d9b8-4ae4-ae3c-a9ad071152b2" containerName="extract" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.402693 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.405377 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-t5rcm" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.423536 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9d58658-glj79"] Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.585592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vfcc\" (UniqueName: \"kubernetes.io/projected/6daad9ca-374e-4351-b5f4-3b262d9816b6-kube-api-access-4vfcc\") pod \"openstack-operator-controller-init-6d9d58658-glj79\" (UID: \"6daad9ca-374e-4351-b5f4-3b262d9816b6\") " pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.686436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfcc\" (UniqueName: \"kubernetes.io/projected/6daad9ca-374e-4351-b5f4-3b262d9816b6-kube-api-access-4vfcc\") pod \"openstack-operator-controller-init-6d9d58658-glj79\" (UID: \"6daad9ca-374e-4351-b5f4-3b262d9816b6\") " pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 
08:11:54.705469 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfcc\" (UniqueName: \"kubernetes.io/projected/6daad9ca-374e-4351-b5f4-3b262d9816b6-kube-api-access-4vfcc\") pod \"openstack-operator-controller-init-6d9d58658-glj79\" (UID: \"6daad9ca-374e-4351-b5f4-3b262d9816b6\") " pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" Jan 25 08:11:54 crc kubenswrapper[4832]: I0125 08:11:54.720766 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" Jan 25 08:11:55 crc kubenswrapper[4832]: I0125 08:11:55.129968 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d9d58658-glj79"] Jan 25 08:11:55 crc kubenswrapper[4832]: I0125 08:11:55.676449 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" event={"ID":"6daad9ca-374e-4351-b5f4-3b262d9816b6","Type":"ContainerStarted","Data":"fc790a56b27961850ee4738a753fe8dba59ed41d3997bf447f7376f6b27b3f63"} Jan 25 08:12:00 crc kubenswrapper[4832]: I0125 08:12:00.730822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" event={"ID":"6daad9ca-374e-4351-b5f4-3b262d9816b6","Type":"ContainerStarted","Data":"92d3fb065071c05616be90ef5797f6fc40549b0d8b955e4095ce99814af23c39"} Jan 25 08:12:00 crc kubenswrapper[4832]: I0125 08:12:00.731575 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" Jan 25 08:12:00 crc kubenswrapper[4832]: I0125 08:12:00.772949 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" podStartSLOduration=2.012554954 podStartE2EDuration="6.772923693s" 
podCreationTimestamp="2026-01-25 08:11:54 +0000 UTC" firstStartedPulling="2026-01-25 08:11:55.137290847 +0000 UTC m=+897.811114380" lastFinishedPulling="2026-01-25 08:11:59.897659586 +0000 UTC m=+902.571483119" observedRunningTime="2026-01-25 08:12:00.770126915 +0000 UTC m=+903.443950488" watchObservedRunningTime="2026-01-25 08:12:00.772923693 +0000 UTC m=+903.446747246" Jan 25 08:12:14 crc kubenswrapper[4832]: I0125 08:12:14.725346 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6d9d58658-glj79" Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.149823 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.150559 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.150608 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.151211 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3375547b40eab52484bd4c11f9fadcc1b41ff739f66fbe9ad0a6f2e89555dcb1"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:12:22 crc 
kubenswrapper[4832]: I0125 08:12:22.151454 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://3375547b40eab52484bd4c11f9fadcc1b41ff739f66fbe9ad0a6f2e89555dcb1" gracePeriod=600 Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.942729 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="3375547b40eab52484bd4c11f9fadcc1b41ff739f66fbe9ad0a6f2e89555dcb1" exitCode=0 Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.942766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"3375547b40eab52484bd4c11f9fadcc1b41ff739f66fbe9ad0a6f2e89555dcb1"} Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.943096 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"bc7fb24eb792d448b55ed5e2d984c4783247ec2dc70708259ed13f1676a5263b"} Jan 25 08:12:22 crc kubenswrapper[4832]: I0125 08:12:22.943123 4832 scope.go:117] "RemoveContainer" containerID="2e5cad5f69dc7b0bf2005b84dd78b370ac52759a8ef11d5ebaebb12ca134de5d" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.454855 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7hnz5"] Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.456808 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.466156 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hnz5"] Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.591181 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2k9s\" (UniqueName: \"kubernetes.io/projected/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-kube-api-access-j2k9s\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.591234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-catalog-content\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.591476 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-utilities\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.692353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-utilities\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.692436 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j2k9s\" (UniqueName: \"kubernetes.io/projected/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-kube-api-access-j2k9s\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.692486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-catalog-content\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.693319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-utilities\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.693347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-catalog-content\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.737289 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2k9s\" (UniqueName: \"kubernetes.io/projected/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-kube-api-access-j2k9s\") pod \"certified-operators-7hnz5\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:30 crc kubenswrapper[4832]: I0125 08:12:30.772661 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:31 crc kubenswrapper[4832]: I0125 08:12:31.112276 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hnz5"] Jan 25 08:12:32 crc kubenswrapper[4832]: I0125 08:12:32.013846 4832 generic.go:334] "Generic (PLEG): container finished" podID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerID="484291b5b6ffa715120bf1be4f1dc156505e4b81f1b8b5b9bc44cd8664377e72" exitCode=0 Jan 25 08:12:32 crc kubenswrapper[4832]: I0125 08:12:32.013910 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hnz5" event={"ID":"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781","Type":"ContainerDied","Data":"484291b5b6ffa715120bf1be4f1dc156505e4b81f1b8b5b9bc44cd8664377e72"} Jan 25 08:12:32 crc kubenswrapper[4832]: I0125 08:12:32.014369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hnz5" event={"ID":"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781","Type":"ContainerStarted","Data":"76c94a4ada191fab81c74a8135e8103d72ac6e7ba3a3431370fab69e42a13715"} Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.235686 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrg9b"] Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.237401 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.250227 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrg9b"] Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.327014 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-catalog-content\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.327080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrcwn\" (UniqueName: \"kubernetes.io/projected/09f1c770-b9b1-40cf-9805-b88a1445218a-kube-api-access-zrcwn\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.327129 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-utilities\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.428688 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-catalog-content\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.428748 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zrcwn\" (UniqueName: \"kubernetes.io/projected/09f1c770-b9b1-40cf-9805-b88a1445218a-kube-api-access-zrcwn\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.428796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-utilities\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.429409 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-utilities\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.429598 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-catalog-content\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.458032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrcwn\" (UniqueName: \"kubernetes.io/projected/09f1c770-b9b1-40cf-9805-b88a1445218a-kube-api-access-zrcwn\") pod \"community-operators-qrg9b\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.554028 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:33 crc kubenswrapper[4832]: I0125 08:12:33.946460 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrg9b"] Jan 25 08:12:33 crc kubenswrapper[4832]: W0125 08:12:33.956555 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f1c770_b9b1_40cf_9805_b88a1445218a.slice/crio-b02f06b863a28d731d0354cd161b29f46c3652314add722082a9acd658808e5f WatchSource:0}: Error finding container b02f06b863a28d731d0354cd161b29f46c3652314add722082a9acd658808e5f: Status 404 returned error can't find the container with id b02f06b863a28d731d0354cd161b29f46c3652314add722082a9acd658808e5f Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.038326 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrg9b" event={"ID":"09f1c770-b9b1-40cf-9805-b88a1445218a","Type":"ContainerStarted","Data":"b02f06b863a28d731d0354cd161b29f46c3652314add722082a9acd658808e5f"} Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.041054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hnz5" event={"ID":"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781","Type":"ContainerStarted","Data":"64279ddfe0fd6c4111fa0a57d49500f98d2c05e2c63437a405d802ef9cb276f3"} Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.731704 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.732446 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" Jan 25 08:12:34 crc kubenswrapper[4832]: W0125 08:12:34.735673 4832 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-d9h8b": failed to list *v1.Secret: secrets "barbican-operator-controller-manager-dockercfg-d9h8b" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 25 08:12:34 crc kubenswrapper[4832]: E0125 08:12:34.735725 4832 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-d9h8b\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"barbican-operator-controller-manager-dockercfg-d9h8b\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.740853 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.741978 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.744260 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8tdbg" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.755055 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.757104 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.758113 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.766522 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9g9ml" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.771245 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.814746 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.816579 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.835121 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4pn2x" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.871674 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.871711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq2jr\" (UniqueName: \"kubernetes.io/projected/0cac9e7d-b342-4b55-a667-76fa1c144080-kube-api-access-jq2jr\") pod \"designate-operator-controller-manager-b45d7bf98-75hsw\" (UID: \"0cac9e7d-b342-4b55-a667-76fa1c144080\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.872354 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kct5t\" (UniqueName: \"kubernetes.io/projected/8251d5ba-3a9a-429c-ba20-1af897640ad3-kube-api-access-kct5t\") pod \"barbican-operator-controller-manager-7f86f8796f-hr9t5\" (UID: \"8251d5ba-3a9a-429c-ba20-1af897640ad3\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.872443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92kb\" (UniqueName: \"kubernetes.io/projected/b1702aab-2dd8-488f-8a7f-93f43df4b0ab-kube-api-access-h92kb\") pod \"glance-operator-controller-manager-78fdd796fd-mgsq7\" (UID: \"b1702aab-2dd8-488f-8a7f-93f43df4b0ab\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 
08:12:34.872474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75bkz\" (UniqueName: \"kubernetes.io/projected/b3a8f752-cc73-4933-88d1-3b661a42ead2-kube-api-access-75bkz\") pod \"cinder-operator-controller-manager-7478f7dbf9-qdwdw\" (UID: \"b3a8f752-cc73-4933-88d1-3b661a42ead2\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.891742 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.892763 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.895962 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-z7j9v" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.917886 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.925976 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.944487 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.945464 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.951663 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lx84n" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.955459 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.965303 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.966248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.969783 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.970016 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zzlmb" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.971418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.972478 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.976168 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82m7m\" (UniqueName: \"kubernetes.io/projected/3f993c1e-81ae-4e86-9b28-eccb1db48f2b-kube-api-access-82m7m\") pod \"horizon-operator-controller-manager-77d5c5b54f-nzjmz\" (UID: \"3f993c1e-81ae-4e86-9b28-eccb1db48f2b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.976233 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbwfn\" (UniqueName: \"kubernetes.io/projected/efdb6007-fdd7-4a18-9dba-4f1571f6f822-kube-api-access-sbwfn\") pod \"heat-operator-controller-manager-594c8c9d5d-h4c7b\" (UID: \"efdb6007-fdd7-4a18-9dba-4f1571f6f822\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.976267 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq2jr\" (UniqueName: \"kubernetes.io/projected/0cac9e7d-b342-4b55-a667-76fa1c144080-kube-api-access-jq2jr\") pod \"designate-operator-controller-manager-b45d7bf98-75hsw\" (UID: \"0cac9e7d-b342-4b55-a667-76fa1c144080\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.976297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kct5t\" (UniqueName: \"kubernetes.io/projected/8251d5ba-3a9a-429c-ba20-1af897640ad3-kube-api-access-kct5t\") pod \"barbican-operator-controller-manager-7f86f8796f-hr9t5\" (UID: \"8251d5ba-3a9a-429c-ba20-1af897640ad3\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" Jan 25 08:12:34 
crc kubenswrapper[4832]: I0125 08:12:34.976320 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92kb\" (UniqueName: \"kubernetes.io/projected/b1702aab-2dd8-488f-8a7f-93f43df4b0ab-kube-api-access-h92kb\") pod \"glance-operator-controller-manager-78fdd796fd-mgsq7\" (UID: \"b1702aab-2dd8-488f-8a7f-93f43df4b0ab\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.976338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75bkz\" (UniqueName: \"kubernetes.io/projected/b3a8f752-cc73-4933-88d1-3b661a42ead2-kube-api-access-75bkz\") pod \"cinder-operator-controller-manager-7478f7dbf9-qdwdw\" (UID: \"b3a8f752-cc73-4933-88d1-3b661a42ead2\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.976783 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx"] Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.977788 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.979336 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-crshq" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.983779 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ztcvw" Jan 25 08:12:34 crc kubenswrapper[4832]: I0125 08:12:34.988744 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.011608 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.025904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq2jr\" (UniqueName: \"kubernetes.io/projected/0cac9e7d-b342-4b55-a667-76fa1c144080-kube-api-access-jq2jr\") pod \"designate-operator-controller-manager-b45d7bf98-75hsw\" (UID: \"0cac9e7d-b342-4b55-a667-76fa1c144080\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.026876 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92kb\" (UniqueName: \"kubernetes.io/projected/b1702aab-2dd8-488f-8a7f-93f43df4b0ab-kube-api-access-h92kb\") pod \"glance-operator-controller-manager-78fdd796fd-mgsq7\" (UID: \"b1702aab-2dd8-488f-8a7f-93f43df4b0ab\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.030648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75bkz\" (UniqueName: 
\"kubernetes.io/projected/b3a8f752-cc73-4933-88d1-3b661a42ead2-kube-api-access-75bkz\") pod \"cinder-operator-controller-manager-7478f7dbf9-qdwdw\" (UID: \"b3a8f752-cc73-4933-88d1-3b661a42ead2\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.030721 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.033485 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.045706 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-crj8g" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.046593 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kct5t\" (UniqueName: \"kubernetes.io/projected/8251d5ba-3a9a-429c-ba20-1af897640ad3-kube-api-access-kct5t\") pod \"barbican-operator-controller-manager-7f86f8796f-hr9t5\" (UID: \"8251d5ba-3a9a-429c-ba20-1af897640ad3\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.054453 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.059177 4832 generic.go:334] "Generic (PLEG): container finished" podID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerID="64279ddfe0fd6c4111fa0a57d49500f98d2c05e2c63437a405d802ef9cb276f3" exitCode=0 Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.059233 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hnz5" 
event={"ID":"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781","Type":"ContainerDied","Data":"64279ddfe0fd6c4111fa0a57d49500f98d2c05e2c63437a405d802ef9cb276f3"} Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.059458 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.060358 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.068677 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-84svc" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.075649 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.077063 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcrt8\" (UniqueName: \"kubernetes.io/projected/50da9b0d-da00-4211-95cd-0218828341e5-kube-api-access-qcrt8\") pod \"keystone-operator-controller-manager-b8b6d4659-vvwcx\" (UID: \"50da9b0d-da00-4211-95cd-0218828341e5\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.077128 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvwz\" (UniqueName: \"kubernetes.io/projected/29b29aa4-b326-4515-9842-6d848c208096-kube-api-access-8fvwz\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.077172 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82m7m\" (UniqueName: \"kubernetes.io/projected/3f993c1e-81ae-4e86-9b28-eccb1db48f2b-kube-api-access-82m7m\") pod \"horizon-operator-controller-manager-77d5c5b54f-nzjmz\" (UID: \"3f993c1e-81ae-4e86-9b28-eccb1db48f2b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.077198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krl5\" (UniqueName: \"kubernetes.io/projected/d75c853c-428e-4f6a-8a82-a050b71af662-kube-api-access-5krl5\") pod \"manila-operator-controller-manager-78c6999f6f-mstsp\" (UID: \"d75c853c-428e-4f6a-8a82-a050b71af662\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.077236 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.077262 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5tf\" (UniqueName: \"kubernetes.io/projected/44be34d2-851c-4bf5-a3fb-87607d045d1f-kube-api-access-cl5tf\") pod \"ironic-operator-controller-manager-598f7747c9-t8jng\" (UID: \"44be34d2-851c-4bf5-a3fb-87607d045d1f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.077301 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbwfn\" (UniqueName: 
\"kubernetes.io/projected/efdb6007-fdd7-4a18-9dba-4f1571f6f822-kube-api-access-sbwfn\") pod \"heat-operator-controller-manager-594c8c9d5d-h4c7b\" (UID: \"efdb6007-fdd7-4a18-9dba-4f1571f6f822\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.078430 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.079230 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.079648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.081617 4832 generic.go:334] "Generic (PLEG): container finished" podID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerID="3c0da3ec0e400b7084c9b356e526fcbdb60ae830140eadc704c95246af074504" exitCode=0 Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.081662 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrg9b" event={"ID":"09f1c770-b9b1-40cf-9805-b88a1445218a","Type":"ContainerDied","Data":"3c0da3ec0e400b7084c9b356e526fcbdb60ae830140eadc704c95246af074504"} Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.082098 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jjpxj" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.082903 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.091646 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.097605 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.101092 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-j69w7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.101100 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.104704 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.105018 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.114986 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82m7m\" (UniqueName: \"kubernetes.io/projected/3f993c1e-81ae-4e86-9b28-eccb1db48f2b-kube-api-access-82m7m\") pod \"horizon-operator-controller-manager-77d5c5b54f-nzjmz\" (UID: \"3f993c1e-81ae-4e86-9b28-eccb1db48f2b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.115051 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.116267 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.120779 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-x9dgc" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.122963 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.126663 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbwfn\" (UniqueName: \"kubernetes.io/projected/efdb6007-fdd7-4a18-9dba-4f1571f6f822-kube-api-access-sbwfn\") pod \"heat-operator-controller-manager-594c8c9d5d-h4c7b\" (UID: \"efdb6007-fdd7-4a18-9dba-4f1571f6f822\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.143493 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.159063 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.160078 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.169055 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.169866 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-8r76f" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.178187 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.178925 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sfqf\" (UniqueName: \"kubernetes.io/projected/0c897c34-1c91-416c-91e2-65ae83958e10-kube-api-access-5sfqf\") pod \"neutron-operator-controller-manager-78d58447c5-hpqjz\" (UID: \"0c897c34-1c91-416c-91e2-65ae83958e10\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.178969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54n57\" (UniqueName: \"kubernetes.io/projected/31cef49b-390b-4029-bdc4-64893be3d183-kube-api-access-54n57\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7\" (UID: \"31cef49b-390b-4029-bdc4-64893be3d183\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.179021 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcrt8\" (UniqueName: \"kubernetes.io/projected/50da9b0d-da00-4211-95cd-0218828341e5-kube-api-access-qcrt8\") pod 
\"keystone-operator-controller-manager-b8b6d4659-vvwcx\" (UID: \"50da9b0d-da00-4211-95cd-0218828341e5\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.179067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fvwz\" (UniqueName: \"kubernetes.io/projected/29b29aa4-b326-4515-9842-6d848c208096-kube-api-access-8fvwz\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.179115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5krl5\" (UniqueName: \"kubernetes.io/projected/d75c853c-428e-4f6a-8a82-a050b71af662-kube-api-access-5krl5\") pod \"manila-operator-controller-manager-78c6999f6f-mstsp\" (UID: \"d75c853c-428e-4f6a-8a82-a050b71af662\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.179165 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.179203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46sfr\" (UniqueName: \"kubernetes.io/projected/b618d12e-02c2-4ae7-872a-15bd233259b5-kube-api-access-46sfr\") pod \"octavia-operator-controller-manager-5f4cd88d46-642xd\" (UID: \"b618d12e-02c2-4ae7-872a-15bd233259b5\") " 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.179230 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5tf\" (UniqueName: \"kubernetes.io/projected/44be34d2-851c-4bf5-a3fb-87607d045d1f-kube-api-access-cl5tf\") pod \"ironic-operator-controller-manager-598f7747c9-t8jng\" (UID: \"44be34d2-851c-4bf5-a3fb-87607d045d1f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.179259 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcv5t\" (UniqueName: \"kubernetes.io/projected/d221c44f-6fb5-4b96-b84e-f1d55253ed08-kube-api-access-mcv5t\") pod \"nova-operator-controller-manager-7bdb645866-q67lr\" (UID: \"d221c44f-6fb5-4b96-b84e-f1d55253ed08\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.179806 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.179881 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert podName:29b29aa4-b326-4515-9842-6d848c208096 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:35.679862483 +0000 UTC m=+938.353686016 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert") pod "infra-operator-controller-manager-694cf4f878-vt5m9" (UID: "29b29aa4-b326-4515-9842-6d848c208096") : secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.187616 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.188555 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.192855 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jsndn" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.203802 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.204852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl5tf\" (UniqueName: \"kubernetes.io/projected/44be34d2-851c-4bf5-a3fb-87607d045d1f-kube-api-access-cl5tf\") pod \"ironic-operator-controller-manager-598f7747c9-t8jng\" (UID: \"44be34d2-851c-4bf5-a3fb-87607d045d1f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.209427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krl5\" (UniqueName: \"kubernetes.io/projected/d75c853c-428e-4f6a-8a82-a050b71af662-kube-api-access-5krl5\") pod \"manila-operator-controller-manager-78c6999f6f-mstsp\" (UID: \"d75c853c-428e-4f6a-8a82-a050b71af662\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" Jan 25 
08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.211127 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcrt8\" (UniqueName: \"kubernetes.io/projected/50da9b0d-da00-4211-95cd-0218828341e5-kube-api-access-qcrt8\") pod \"keystone-operator-controller-manager-b8b6d4659-vvwcx\" (UID: \"50da9b0d-da00-4211-95cd-0218828341e5\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.216922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fvwz\" (UniqueName: \"kubernetes.io/projected/29b29aa4-b326-4515-9842-6d848c208096-kube-api-access-8fvwz\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.217972 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.223475 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.225251 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.236812 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jjltn" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.279494 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.295088 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.315697 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sfqf\" (UniqueName: \"kubernetes.io/projected/0c897c34-1c91-416c-91e2-65ae83958e10-kube-api-access-5sfqf\") pod \"neutron-operator-controller-manager-78d58447c5-hpqjz\" (UID: \"0c897c34-1c91-416c-91e2-65ae83958e10\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.315767 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54n57\" (UniqueName: \"kubernetes.io/projected/31cef49b-390b-4029-bdc4-64893be3d183-kube-api-access-54n57\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7\" (UID: \"31cef49b-390b-4029-bdc4-64893be3d183\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.317402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrbl\" (UniqueName: \"kubernetes.io/projected/1e30c775-7a32-478e-8c3c-7312757f846b-kube-api-access-pbrbl\") pod \"placement-operator-controller-manager-79d5ccc684-lrsxz\" 
(UID: \"1e30c775-7a32-478e-8c3c-7312757f846b\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.317565 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46sfr\" (UniqueName: \"kubernetes.io/projected/b618d12e-02c2-4ae7-872a-15bd233259b5-kube-api-access-46sfr\") pod \"octavia-operator-controller-manager-5f4cd88d46-642xd\" (UID: \"b618d12e-02c2-4ae7-872a-15bd233259b5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.317613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcv5t\" (UniqueName: \"kubernetes.io/projected/d221c44f-6fb5-4b96-b84e-f1d55253ed08-kube-api-access-mcv5t\") pod \"nova-operator-controller-manager-7bdb645866-q67lr\" (UID: \"d221c44f-6fb5-4b96-b84e-f1d55253ed08\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.317671 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.317708 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lggrn\" (UniqueName: \"kubernetes.io/projected/8d21c83b-b981-4466-b81a-ed7954d1f3cb-kube-api-access-lggrn\") pod \"ovn-operator-controller-manager-6f75f45d54-cf7rg\" (UID: \"8d21c83b-b981-4466-b81a-ed7954d1f3cb\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" Jan 25 08:12:35 
crc kubenswrapper[4832]: I0125 08:12:35.317737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57zq\" (UniqueName: \"kubernetes.io/projected/3b784c4a-e1cf-42fb-ad96-dca059f63e79-kube-api-access-t57zq\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.341431 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.341556 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.345544 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9nd6t" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.362526 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sfqf\" (UniqueName: \"kubernetes.io/projected/0c897c34-1c91-416c-91e2-65ae83958e10-kube-api-access-5sfqf\") pod \"neutron-operator-controller-manager-78d58447c5-hpqjz\" (UID: \"0c897c34-1c91-416c-91e2-65ae83958e10\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.376120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.382634 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.416491 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54n57\" (UniqueName: \"kubernetes.io/projected/31cef49b-390b-4029-bdc4-64893be3d183-kube-api-access-54n57\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7\" (UID: \"31cef49b-390b-4029-bdc4-64893be3d183\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.423072 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.423740 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.430346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcv5t\" (UniqueName: \"kubernetes.io/projected/d221c44f-6fb5-4b96-b84e-f1d55253ed08-kube-api-access-mcv5t\") pod \"nova-operator-controller-manager-7bdb645866-q67lr\" (UID: \"d221c44f-6fb5-4b96-b84e-f1d55253ed08\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.455649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbrbl\" (UniqueName: \"kubernetes.io/projected/1e30c775-7a32-478e-8c3c-7312757f846b-kube-api-access-pbrbl\") pod \"placement-operator-controller-manager-79d5ccc684-lrsxz\" (UID: \"1e30c775-7a32-478e-8c3c-7312757f846b\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.455739 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8g9b\" (UniqueName: \"kubernetes.io/projected/eb801494-724f-482a-a359-896e5b735b62-kube-api-access-w8g9b\") pod \"swift-operator-controller-manager-547cbdb99f-zwlrf\" (UID: \"eb801494-724f-482a-a359-896e5b735b62\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.455778 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.455798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lggrn\" (UniqueName: \"kubernetes.io/projected/8d21c83b-b981-4466-b81a-ed7954d1f3cb-kube-api-access-lggrn\") pod \"ovn-operator-controller-manager-6f75f45d54-cf7rg\" (UID: \"8d21c83b-b981-4466-b81a-ed7954d1f3cb\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.455838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t57zq\" (UniqueName: \"kubernetes.io/projected/3b784c4a-e1cf-42fb-ad96-dca059f63e79-kube-api-access-t57zq\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.456183 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not 
found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.456227 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert podName:3b784c4a-e1cf-42fb-ad96-dca059f63e79 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:35.956214474 +0000 UTC m=+938.630038007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" (UID: "3b784c4a-e1cf-42fb-ad96-dca059f63e79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.469584 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46sfr\" (UniqueName: \"kubernetes.io/projected/b618d12e-02c2-4ae7-872a-15bd233259b5-kube-api-access-46sfr\") pod \"octavia-operator-controller-manager-5f4cd88d46-642xd\" (UID: \"b618d12e-02c2-4ae7-872a-15bd233259b5\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.479557 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.480694 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.504000 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.504530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbrbl\" (UniqueName: \"kubernetes.io/projected/1e30c775-7a32-478e-8c3c-7312757f846b-kube-api-access-pbrbl\") pod \"placement-operator-controller-manager-79d5ccc684-lrsxz\" (UID: \"1e30c775-7a32-478e-8c3c-7312757f846b\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.515589 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.516400 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6lvzp" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.528991 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.532917 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t57zq\" (UniqueName: \"kubernetes.io/projected/3b784c4a-e1cf-42fb-ad96-dca059f63e79-kube-api-access-t57zq\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.546081 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.558353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8g9b\" (UniqueName: \"kubernetes.io/projected/eb801494-724f-482a-a359-896e5b735b62-kube-api-access-w8g9b\") pod \"swift-operator-controller-manager-547cbdb99f-zwlrf\" (UID: \"eb801494-724f-482a-a359-896e5b735b62\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.558477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsnv\" (UniqueName: \"kubernetes.io/projected/47605944-bcb8-4196-9eb3-b26c2e923e70-kube-api-access-rvsnv\") pod \"telemetry-operator-controller-manager-85cd9769bb-59gds\" (UID: \"47605944-bcb8-4196-9eb3-b26c2e923e70\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.558963 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.561581 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lggrn\" (UniqueName: \"kubernetes.io/projected/8d21c83b-b981-4466-b81a-ed7954d1f3cb-kube-api-access-lggrn\") pod \"ovn-operator-controller-manager-6f75f45d54-cf7rg\" (UID: \"8d21c83b-b981-4466-b81a-ed7954d1f3cb\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.627288 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.653335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8g9b\" (UniqueName: \"kubernetes.io/projected/eb801494-724f-482a-a359-896e5b735b62-kube-api-access-w8g9b\") pod \"swift-operator-controller-manager-547cbdb99f-zwlrf\" (UID: \"eb801494-724f-482a-a359-896e5b735b62\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.659282 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsnv\" (UniqueName: \"kubernetes.io/projected/47605944-bcb8-4196-9eb3-b26c2e923e70-kube-api-access-rvsnv\") pod \"telemetry-operator-controller-manager-85cd9769bb-59gds\" (UID: \"47605944-bcb8-4196-9eb3-b26c2e923e70\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.727538 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.735222 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvsnv\" (UniqueName: \"kubernetes.io/projected/47605944-bcb8-4196-9eb3-b26c2e923e70-kube-api-access-rvsnv\") pod \"telemetry-operator-controller-manager-85cd9769bb-59gds\" (UID: \"47605944-bcb8-4196-9eb3-b26c2e923e70\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.764449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.764615 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.764660 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert podName:29b29aa4-b326-4515-9842-6d848c208096 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:36.764643914 +0000 UTC m=+939.438467447 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert") pod "infra-operator-controller-manager-694cf4f878-vt5m9" (UID: "29b29aa4-b326-4515-9842-6d848c208096") : secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.786795 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.787871 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.787894 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-57npv"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.788480 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-57npv"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.788625 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.788816 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.793606 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.794456 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.795580 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.804683 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-trll6" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.804889 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.805001 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-llc26" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.812844 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-46tnh" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.813231 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.813284 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.858444 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.865343 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.865373 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.865420 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw47d\" (UniqueName: \"kubernetes.io/projected/1f038807-2bed-41a2-aecd-35d29e529eb8-kube-api-access-jw47d\") pod \"watcher-operator-controller-manager-564965969-57npv\" (UID: \"1f038807-2bed-41a2-aecd-35d29e529eb8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.865538 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj99s\" (UniqueName: \"kubernetes.io/projected/1529f819-52bd-428f-970f-5f67f071e729-kube-api-access-nj99s\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" 
Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.865555 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96d8l\" (UniqueName: \"kubernetes.io/projected/c3356b9d-3a3c-4583-9803-d08fcb621401-kube-api-access-96d8l\") pod \"test-operator-controller-manager-69797bbcbd-qnxqc\" (UID: \"c3356b9d-3a3c-4583-9803-d08fcb621401\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.910886 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.912096 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.929309 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dpkz6" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.953722 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw"] Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.977875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.977928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod 
\"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.977951 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.977978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw47d\" (UniqueName: \"kubernetes.io/projected/1f038807-2bed-41a2-aecd-35d29e529eb8-kube-api-access-jw47d\") pod \"watcher-operator-controller-manager-564965969-57npv\" (UID: \"1f038807-2bed-41a2-aecd-35d29e529eb8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.978034 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6tk\" (UniqueName: \"kubernetes.io/projected/cdb822ca-2a1d-4b10-8d44-f2cb33173358-kube-api-access-rn6tk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-f87nw\" (UID: \"cdb822ca-2a1d-4b10-8d44-f2cb33173358\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.978075 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj99s\" (UniqueName: \"kubernetes.io/projected/1529f819-52bd-428f-970f-5f67f071e729-kube-api-access-nj99s\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " 
pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:35 crc kubenswrapper[4832]: I0125 08:12:35.978093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96d8l\" (UniqueName: \"kubernetes.io/projected/c3356b9d-3a3c-4583-9803-d08fcb621401-kube-api-access-96d8l\") pod \"test-operator-controller-manager-69797bbcbd-qnxqc\" (UID: \"c3356b9d-3a3c-4583-9803-d08fcb621401\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.978575 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.978617 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert podName:3b784c4a-e1cf-42fb-ad96-dca059f63e79 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:36.978604332 +0000 UTC m=+939.652427855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" (UID: "3b784c4a-e1cf-42fb-ad96-dca059f63e79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.978875 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.978906 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. 
No retries permitted until 2026-01-25 08:12:36.478897202 +0000 UTC m=+939.152720735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "metrics-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.978939 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 25 08:12:35 crc kubenswrapper[4832]: E0125 08:12:35.978956 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:36.478951293 +0000 UTC m=+939.152774826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "webhook-server-cert" not found Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.005668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw47d\" (UniqueName: \"kubernetes.io/projected/1f038807-2bed-41a2-aecd-35d29e529eb8-kube-api-access-jw47d\") pod \"watcher-operator-controller-manager-564965969-57npv\" (UID: \"1f038807-2bed-41a2-aecd-35d29e529eb8\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.008780 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj99s\" (UniqueName: \"kubernetes.io/projected/1529f819-52bd-428f-970f-5f67f071e729-kube-api-access-nj99s\") pod 
\"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.010661 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96d8l\" (UniqueName: \"kubernetes.io/projected/c3356b9d-3a3c-4583-9803-d08fcb621401-kube-api-access-96d8l\") pod \"test-operator-controller-manager-69797bbcbd-qnxqc\" (UID: \"c3356b9d-3a3c-4583-9803-d08fcb621401\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.069962 4832 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.070065 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.074172 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-d9h8b" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.080023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6tk\" (UniqueName: \"kubernetes.io/projected/cdb822ca-2a1d-4b10-8d44-f2cb33173358-kube-api-access-rn6tk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-f87nw\" (UID: \"cdb822ca-2a1d-4b10-8d44-f2cb33173358\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.101800 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6tk\" (UniqueName: \"kubernetes.io/projected/cdb822ca-2a1d-4b10-8d44-f2cb33173358-kube-api-access-rn6tk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-f87nw\" (UID: \"cdb822ca-2a1d-4b10-8d44-f2cb33173358\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.131092 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw"] Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.197375 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.234816 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.257023 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.314000 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b"] Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.319274 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw"] Jan 25 08:12:36 crc kubenswrapper[4832]: W0125 08:12:36.345203 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cac9e7d_b342_4b55_a667_76fa1c144080.slice/crio-8c53e089652a442a103c53346a261f446d671a842ac495e3388c4e810135a46f WatchSource:0}: Error finding container 8c53e089652a442a103c53346a261f446d671a842ac495e3388c4e810135a46f: Status 404 returned error can't find the container with id 8c53e089652a442a103c53346a261f446d671a842ac495e3388c4e810135a46f Jan 25 08:12:36 crc kubenswrapper[4832]: W0125 08:12:36.384133 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefdb6007_fdd7_4a18_9dba_4f1571f6f822.slice/crio-b937f004fce88be68066b0944bf54fa6907f16c4fd45db54ec11f8f82058ecd7 WatchSource:0}: Error finding container b937f004fce88be68066b0944bf54fa6907f16c4fd45db54ec11f8f82058ecd7: Status 404 returned error can't find the container with id b937f004fce88be68066b0944bf54fa6907f16c4fd45db54ec11f8f82058ecd7 Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.430435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7"] Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.489095 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.489137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:36 crc kubenswrapper[4832]: E0125 08:12:36.489284 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 25 08:12:36 crc kubenswrapper[4832]: E0125 08:12:36.489341 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:37.489312924 +0000 UTC m=+940.163136447 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "webhook-server-cert" not found Jan 25 08:12:36 crc kubenswrapper[4832]: E0125 08:12:36.489686 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 25 08:12:36 crc kubenswrapper[4832]: E0125 08:12:36.489713 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:37.489705407 +0000 UTC m=+940.163528940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "metrics-server-cert" not found Jan 25 08:12:36 crc kubenswrapper[4832]: I0125 08:12:36.816096 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:36 crc kubenswrapper[4832]: E0125 08:12:36.817514 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:36 crc kubenswrapper[4832]: E0125 08:12:36.817574 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert 
podName:29b29aa4-b326-4515-9842-6d848c208096 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:38.817555878 +0000 UTC m=+941.491379501 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert") pod "infra-operator-controller-manager-694cf4f878-vt5m9" (UID: "29b29aa4-b326-4515-9842-6d848c208096") : secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.021679 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.026061 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.026193 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.026252 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert podName:3b784c4a-e1cf-42fb-ad96-dca059f63e79 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:39.02623663 +0000 UTC m=+941.700060163 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" (UID: "3b784c4a-e1cf-42fb-ad96-dca059f63e79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.038977 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng"] Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.083376 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44be34d2_851c_4bf5_a3fb_87607d045d1f.slice/crio-6ed428378388ee013711eb519a628aed3d73e1c98895b8b6d100e01b1b062708 WatchSource:0}: Error finding container 6ed428378388ee013711eb519a628aed3d73e1c98895b8b6d100e01b1b062708: Status 404 returned error can't find the container with id 6ed428378388ee013711eb519a628aed3d73e1c98895b8b6d100e01b1b062708 Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.113030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" event={"ID":"44be34d2-851c-4bf5-a3fb-87607d045d1f","Type":"ContainerStarted","Data":"6ed428378388ee013711eb519a628aed3d73e1c98895b8b6d100e01b1b062708"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.115839 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" event={"ID":"0cac9e7d-b342-4b55-a667-76fa1c144080","Type":"ContainerStarted","Data":"8c53e089652a442a103c53346a261f446d671a842ac495e3388c4e810135a46f"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.121187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrg9b" 
event={"ID":"09f1c770-b9b1-40cf-9805-b88a1445218a","Type":"ContainerStarted","Data":"8c7d2065755c03c6d86cba8bd7e425579eebda18aacdca0c04ae93121b369e38"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.123001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" event={"ID":"50da9b0d-da00-4211-95cd-0218828341e5","Type":"ContainerStarted","Data":"124f880c2bca656f56a8cbb4995c7303149754f06c9f99457c02917c0f0ab707"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.129447 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" event={"ID":"efdb6007-fdd7-4a18-9dba-4f1571f6f822","Type":"ContainerStarted","Data":"b937f004fce88be68066b0944bf54fa6907f16c4fd45db54ec11f8f82058ecd7"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.130205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" event={"ID":"b3a8f752-cc73-4933-88d1-3b661a42ead2","Type":"ContainerStarted","Data":"20ac8c0f14efbe69ee8d455dc7d90adef6b6a4657db16e4bf4a49aae89e98c6f"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.131041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" event={"ID":"b1702aab-2dd8-488f-8a7f-93f43df4b0ab","Type":"ContainerStarted","Data":"7fc69571f4f93f8a4b26fc308bb339d06dace5702407b0c9d83fc627f5438da8"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.138811 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hnz5" event={"ID":"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781","Type":"ContainerStarted","Data":"5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a"} Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.165362 4832 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-7hnz5" podStartSLOduration=3.14505409 podStartE2EDuration="7.165343495s" podCreationTimestamp="2026-01-25 08:12:30 +0000 UTC" firstStartedPulling="2026-01-25 08:12:32.016455387 +0000 UTC m=+934.690278920" lastFinishedPulling="2026-01-25 08:12:36.036744792 +0000 UTC m=+938.710568325" observedRunningTime="2026-01-25 08:12:37.157375894 +0000 UTC m=+939.831199427" watchObservedRunningTime="2026-01-25 08:12:37.165343495 +0000 UTC m=+939.839167028" Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.276791 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.297170 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.315228 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.319844 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.325502 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz"] Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.331553 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f993c1e_81ae_4e86_9b28_eccb1db48f2b.slice/crio-9afda7018692248002a4822cb365a3e961d2dee157561d4badf7357d2eb55d67 WatchSource:0}: Error finding container 9afda7018692248002a4822cb365a3e961d2dee157561d4badf7357d2eb55d67: Status 404 returned error can't find the container with id 
9afda7018692248002a4822cb365a3e961d2dee157561d4badf7357d2eb55d67 Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.333277 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e30c775_7a32_478e_8c3c_7312757f846b.slice/crio-23ddcd0a34800a22dd88e48d4597c3248553da8bff8b09439b285ea849e534cf WatchSource:0}: Error finding container 23ddcd0a34800a22dd88e48d4597c3248553da8bff8b09439b285ea849e534cf: Status 404 returned error can't find the container with id 23ddcd0a34800a22dd88e48d4597c3248553da8bff8b09439b285ea849e534cf Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.333310 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr"] Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.335102 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb618d12e_02c2_4ae7_872a_15bd233259b5.slice/crio-9fa37a2ad70e5bd7fc857a5a9ee6cb21628c38bc4801edf2dde5008944796817 WatchSource:0}: Error finding container 9fa37a2ad70e5bd7fc857a5a9ee6cb21628c38bc4801edf2dde5008944796817: Status 404 returned error can't find the container with id 9fa37a2ad70e5bd7fc857a5a9ee6cb21628c38bc4801edf2dde5008944796817 Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.337554 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd221c44f_6fb5_4b96_b84e_f1d55253ed08.slice/crio-5caa7e1431961539535f0414963d6dde5df5239f97032a90e7f1fa4fbb52bee2 WatchSource:0}: Error finding container 5caa7e1431961539535f0414963d6dde5df5239f97032a90e7f1fa4fbb52bee2: Status 404 returned error can't find the container with id 5caa7e1431961539535f0414963d6dde5df5239f97032a90e7f1fa4fbb52bee2 Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.341216 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.347281 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5"] Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.350577 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d21c83b_b981_4466_b81a_ed7954d1f3cb.slice/crio-43e3346991c372277eb7427e08f54d51e08935746e759d4ce1e1f8b354930a57 WatchSource:0}: Error finding container 43e3346991c372277eb7427e08f54d51e08935746e759d4ce1e1f8b354930a57: Status 404 returned error can't find the container with id 43e3346991c372277eb7427e08f54d51e08935746e759d4ce1e1f8b354930a57 Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.460998 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7"] Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.464162 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31cef49b_390b_4029_bdc4_64893be3d183.slice/crio-f3b97427b05661a847f5fb0c29b91752aa924395fb8ea1b3ea5d09b7306c4ae0 WatchSource:0}: Error finding container f3b97427b05661a847f5fb0c29b91752aa924395fb8ea1b3ea5d09b7306c4ae0: Status 404 returned error can't find the container with id f3b97427b05661a847f5fb0c29b91752aa924395fb8ea1b3ea5d09b7306c4ae0 Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.530887 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.535952 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 
08:12:37.538431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.538472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.538617 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.538665 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:39.538649315 +0000 UTC m=+942.212472848 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "webhook-server-cert" not found Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.539048 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.539073 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:39.539065158 +0000 UTC m=+942.212888691 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "metrics-server-cert" not found Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.544909 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rvsnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-59gds_openstack-operators(47605944-bcb8-4196-9eb3-b26c2e923e70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.547238 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" podUID="47605944-bcb8-4196-9eb3-b26c2e923e70" Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.551085 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc"] Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.562985 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw"] Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.572587 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8g9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-zwlrf_openstack-operators(eb801494-724f-482a-a359-896e5b735b62): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.573880 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" podUID="eb801494-724f-482a-a359-896e5b735b62" Jan 25 08:12:37 crc kubenswrapper[4832]: I0125 08:12:37.589724 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-57npv"] Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.589856 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdb822ca_2a1d_4b10_8d44_f2cb33173358.slice/crio-47d0ee3afc8f6b2e2e04a400c242e86e0108cc724f0cc5bd694cc1fe21481ccf WatchSource:0}: Error finding container 47d0ee3afc8f6b2e2e04a400c242e86e0108cc724f0cc5bd694cc1fe21481ccf: Status 404 returned error can't find the container with id 47d0ee3afc8f6b2e2e04a400c242e86e0108cc724f0cc5bd694cc1fe21481ccf Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.592702 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rn6tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-f87nw_openstack-operators(cdb822ca-2a1d-4b10-8d44-f2cb33173358): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.594141 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" podUID="cdb822ca-2a1d-4b10-8d44-f2cb33173358" Jan 25 08:12:37 crc kubenswrapper[4832]: W0125 08:12:37.596970 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f038807_2bed_41a2_aecd_35d29e529eb8.slice/crio-0d92f9dc73d228031a2dcb8d97ac66dc266a9458245314ed691e5babd4d4a08c WatchSource:0}: Error finding container 0d92f9dc73d228031a2dcb8d97ac66dc266a9458245314ed691e5babd4d4a08c: Status 404 returned error can't find the container with id 0d92f9dc73d228031a2dcb8d97ac66dc266a9458245314ed691e5babd4d4a08c Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.598662 
4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jw47d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-57npv_openstack-operators(1f038807-2bed-41a2-aecd-35d29e529eb8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 25 08:12:37 crc kubenswrapper[4832]: E0125 08:12:37.600289 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" podUID="1f038807-2bed-41a2-aecd-35d29e529eb8" Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.154111 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" event={"ID":"0c897c34-1c91-416c-91e2-65ae83958e10","Type":"ContainerStarted","Data":"b95a4b3e3bd572f890e7f0d49c2409cb297606dbef8f7bc6fa2bfb69e4cd8571"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.155368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" 
event={"ID":"8d21c83b-b981-4466-b81a-ed7954d1f3cb","Type":"ContainerStarted","Data":"43e3346991c372277eb7427e08f54d51e08935746e759d4ce1e1f8b354930a57"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.162009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" event={"ID":"c3356b9d-3a3c-4583-9803-d08fcb621401","Type":"ContainerStarted","Data":"d2fb23f6332014fa2fea2d1e913469e1fa83882a1dc25632cd93b2adfb50364a"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.165222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" event={"ID":"cdb822ca-2a1d-4b10-8d44-f2cb33173358","Type":"ContainerStarted","Data":"47d0ee3afc8f6b2e2e04a400c242e86e0108cc724f0cc5bd694cc1fe21481ccf"} Jan 25 08:12:38 crc kubenswrapper[4832]: E0125 08:12:38.166884 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" podUID="cdb822ca-2a1d-4b10-8d44-f2cb33173358" Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.169294 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" event={"ID":"47605944-bcb8-4196-9eb3-b26c2e923e70","Type":"ContainerStarted","Data":"2ea71f2ccfecae3ad9ba0bb31195376e8bc95a1b19f6e366678d3e03b73c3a0e"} Jan 25 08:12:38 crc kubenswrapper[4832]: E0125 08:12:38.170874 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" podUID="47605944-bcb8-4196-9eb3-b26c2e923e70" Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.171772 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" event={"ID":"31cef49b-390b-4029-bdc4-64893be3d183","Type":"ContainerStarted","Data":"f3b97427b05661a847f5fb0c29b91752aa924395fb8ea1b3ea5d09b7306c4ae0"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.175522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" event={"ID":"1e30c775-7a32-478e-8c3c-7312757f846b","Type":"ContainerStarted","Data":"23ddcd0a34800a22dd88e48d4597c3248553da8bff8b09439b285ea849e534cf"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.176928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" event={"ID":"d221c44f-6fb5-4b96-b84e-f1d55253ed08","Type":"ContainerStarted","Data":"5caa7e1431961539535f0414963d6dde5df5239f97032a90e7f1fa4fbb52bee2"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.179812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" event={"ID":"1f038807-2bed-41a2-aecd-35d29e529eb8","Type":"ContainerStarted","Data":"0d92f9dc73d228031a2dcb8d97ac66dc266a9458245314ed691e5babd4d4a08c"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.182289 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" 
event={"ID":"b618d12e-02c2-4ae7-872a-15bd233259b5","Type":"ContainerStarted","Data":"9fa37a2ad70e5bd7fc857a5a9ee6cb21628c38bc4801edf2dde5008944796817"} Jan 25 08:12:38 crc kubenswrapper[4832]: E0125 08:12:38.183934 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" podUID="1f038807-2bed-41a2-aecd-35d29e529eb8" Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.186060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" event={"ID":"d75c853c-428e-4f6a-8a82-a050b71af662","Type":"ContainerStarted","Data":"69095a8811b6b99ba02c4c1ab64eba3f3950e2c915db818e57287651f2ebea71"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.199378 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" event={"ID":"eb801494-724f-482a-a359-896e5b735b62","Type":"ContainerStarted","Data":"abf9f3b8a848ede37bb49f2a7d1cc5c69cd37b7f609b44cf79bf9e516ca0fa76"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.213372 4832 generic.go:334] "Generic (PLEG): container finished" podID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerID="8c7d2065755c03c6d86cba8bd7e425579eebda18aacdca0c04ae93121b369e38" exitCode=0 Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.213492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrg9b" event={"ID":"09f1c770-b9b1-40cf-9805-b88a1445218a","Type":"ContainerDied","Data":"8c7d2065755c03c6d86cba8bd7e425579eebda18aacdca0c04ae93121b369e38"} Jan 25 08:12:38 crc kubenswrapper[4832]: E0125 08:12:38.216026 4832 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" podUID="eb801494-724f-482a-a359-896e5b735b62" Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.241659 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" event={"ID":"3f993c1e-81ae-4e86-9b28-eccb1db48f2b","Type":"ContainerStarted","Data":"9afda7018692248002a4822cb365a3e961d2dee157561d4badf7357d2eb55d67"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.244006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" event={"ID":"8251d5ba-3a9a-429c-ba20-1af897640ad3","Type":"ContainerStarted","Data":"89a6d32406295e4a06d4cf448da808dbde0dfa46a3e002a7946ac1b9c42c1288"} Jan 25 08:12:38 crc kubenswrapper[4832]: I0125 08:12:38.881243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:38 crc kubenswrapper[4832]: E0125 08:12:38.881473 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:38 crc kubenswrapper[4832]: E0125 08:12:38.881737 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert podName:29b29aa4-b326-4515-9842-6d848c208096 nodeName:}" failed. 
No retries permitted until 2026-01-25 08:12:42.881715723 +0000 UTC m=+945.555539256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert") pod "infra-operator-controller-manager-694cf4f878-vt5m9" (UID: "29b29aa4-b326-4515-9842-6d848c208096") : secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:39 crc kubenswrapper[4832]: I0125 08:12:39.088136 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.088332 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.088449 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert podName:3b784c4a-e1cf-42fb-ad96-dca059f63e79 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:43.088417855 +0000 UTC m=+945.762241388 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" (UID: "3b784c4a-e1cf-42fb-ad96-dca059f63e79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.275087 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" podUID="47605944-bcb8-4196-9eb3-b26c2e923e70" Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.276497 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" podUID="eb801494-724f-482a-a359-896e5b735b62" Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.277040 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" podUID="cdb822ca-2a1d-4b10-8d44-f2cb33173358" Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.277410 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" podUID="1f038807-2bed-41a2-aecd-35d29e529eb8" Jan 25 08:12:39 crc kubenswrapper[4832]: I0125 08:12:39.596857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:39 crc kubenswrapper[4832]: I0125 08:12:39.597442 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.597315 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.597742 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:43.597718912 +0000 UTC m=+946.271542445 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "metrics-server-cert" not found Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.597654 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 25 08:12:39 crc kubenswrapper[4832]: E0125 08:12:39.598195 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:43.598186737 +0000 UTC m=+946.272010270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "webhook-server-cert" not found Jan 25 08:12:40 crc kubenswrapper[4832]: I0125 08:12:40.296124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrg9b" event={"ID":"09f1c770-b9b1-40cf-9805-b88a1445218a","Type":"ContainerStarted","Data":"7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239"} Jan 25 08:12:40 crc kubenswrapper[4832]: I0125 08:12:40.320685 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrg9b" podStartSLOduration=3.751719495 podStartE2EDuration="7.320666327s" podCreationTimestamp="2026-01-25 08:12:33 +0000 UTC" firstStartedPulling="2026-01-25 08:12:35.127575488 +0000 UTC m=+937.801399021" lastFinishedPulling="2026-01-25 08:12:38.69652233 +0000 UTC m=+941.370345853" observedRunningTime="2026-01-25 08:12:40.316224818 +0000 UTC 
m=+942.990048351" watchObservedRunningTime="2026-01-25 08:12:40.320666327 +0000 UTC m=+942.994489860" Jan 25 08:12:40 crc kubenswrapper[4832]: I0125 08:12:40.773485 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:40 crc kubenswrapper[4832]: I0125 08:12:40.774913 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:40 crc kubenswrapper[4832]: I0125 08:12:40.888209 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:41 crc kubenswrapper[4832]: I0125 08:12:41.386509 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:12:42 crc kubenswrapper[4832]: I0125 08:12:42.955247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:42 crc kubenswrapper[4832]: E0125 08:12:42.955408 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:42 crc kubenswrapper[4832]: E0125 08:12:42.955975 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert podName:29b29aa4-b326-4515-9842-6d848c208096 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:50.955949865 +0000 UTC m=+953.629773398 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert") pod "infra-operator-controller-manager-694cf4f878-vt5m9" (UID: "29b29aa4-b326-4515-9842-6d848c208096") : secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:43 crc kubenswrapper[4832]: I0125 08:12:43.027941 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7hnz5"] Jan 25 08:12:43 crc kubenswrapper[4832]: I0125 08:12:43.158657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:43 crc kubenswrapper[4832]: E0125 08:12:43.158941 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:43 crc kubenswrapper[4832]: E0125 08:12:43.159075 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert podName:3b784c4a-e1cf-42fb-ad96-dca059f63e79 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:51.159044693 +0000 UTC m=+953.832868266 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" (UID: "3b784c4a-e1cf-42fb-ad96-dca059f63e79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:43 crc kubenswrapper[4832]: I0125 08:12:43.555666 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:43 crc kubenswrapper[4832]: I0125 08:12:43.555714 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:43 crc kubenswrapper[4832]: I0125 08:12:43.620450 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:43 crc kubenswrapper[4832]: I0125 08:12:43.666232 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:43 crc kubenswrapper[4832]: I0125 08:12:43.666283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:43 crc kubenswrapper[4832]: E0125 08:12:43.666483 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 25 08:12:43 crc 
kubenswrapper[4832]: E0125 08:12:43.666526 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 25 08:12:43 crc kubenswrapper[4832]: E0125 08:12:43.666557 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:51.666539822 +0000 UTC m=+954.340363355 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "metrics-server-cert" not found Jan 25 08:12:43 crc kubenswrapper[4832]: E0125 08:12:43.666641 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:12:51.666579924 +0000 UTC m=+954.340403457 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "webhook-server-cert" not found Jan 25 08:12:44 crc kubenswrapper[4832]: I0125 08:12:44.334633 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7hnz5" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="registry-server" containerID="cri-o://5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" gracePeriod=2 Jan 25 08:12:44 crc kubenswrapper[4832]: I0125 08:12:44.381147 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:12:45 crc kubenswrapper[4832]: I0125 08:12:45.370258 4832 generic.go:334] "Generic (PLEG): container finished" podID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" exitCode=0 Jan 25 08:12:45 crc kubenswrapper[4832]: I0125 08:12:45.370313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hnz5" event={"ID":"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781","Type":"ContainerDied","Data":"5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a"} Jan 25 08:12:45 crc kubenswrapper[4832]: I0125 08:12:45.427476 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrg9b"] Jan 25 08:12:46 crc kubenswrapper[4832]: I0125 08:12:46.376691 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qrg9b" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="registry-server" containerID="cri-o://7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" gracePeriod=2 Jan 25 08:12:47 crc 
kubenswrapper[4832]: I0125 08:12:47.402669 4832 generic.go:334] "Generic (PLEG): container finished" podID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" exitCode=0 Jan 25 08:12:47 crc kubenswrapper[4832]: I0125 08:12:47.402716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrg9b" event={"ID":"09f1c770-b9b1-40cf-9805-b88a1445218a","Type":"ContainerDied","Data":"7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239"} Jan 25 08:12:50 crc kubenswrapper[4832]: E0125 08:12:50.774051 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:12:50 crc kubenswrapper[4832]: E0125 08:12:50.775375 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:12:50 crc kubenswrapper[4832]: E0125 08:12:50.775978 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:12:50 crc kubenswrapper[4832]: E0125 08:12:50.776028 4832 prober.go:104] "Probe 
errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-7hnz5" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="registry-server" Jan 25 08:12:50 crc kubenswrapper[4832]: I0125 08:12:50.994155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:12:50 crc kubenswrapper[4832]: E0125 08:12:50.994330 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:50 crc kubenswrapper[4832]: E0125 08:12:50.994471 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert podName:29b29aa4-b326-4515-9842-6d848c208096 nodeName:}" failed. No retries permitted until 2026-01-25 08:13:06.994433739 +0000 UTC m=+969.668257312 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert") pod "infra-operator-controller-manager-694cf4f878-vt5m9" (UID: "29b29aa4-b326-4515-9842-6d848c208096") : secret "infra-operator-webhook-server-cert" not found Jan 25 08:12:51 crc kubenswrapper[4832]: I0125 08:12:51.198348 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:12:51 crc kubenswrapper[4832]: E0125 08:12:51.198501 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:51 crc kubenswrapper[4832]: E0125 08:12:51.198573 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert podName:3b784c4a-e1cf-42fb-ad96-dca059f63e79 nodeName:}" failed. No retries permitted until 2026-01-25 08:13:07.198554668 +0000 UTC m=+969.872378201 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" (UID: "3b784c4a-e1cf-42fb-ad96-dca059f63e79") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 25 08:12:51 crc kubenswrapper[4832]: I0125 08:12:51.705762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:51 crc kubenswrapper[4832]: I0125 08:12:51.705828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:12:51 crc kubenswrapper[4832]: E0125 08:12:51.705981 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 25 08:12:51 crc kubenswrapper[4832]: E0125 08:12:51.705992 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 25 08:12:51 crc kubenswrapper[4832]: E0125 08:12:51.706050 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:13:07.706031028 +0000 UTC m=+970.379854561 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "webhook-server-cert" not found Jan 25 08:12:51 crc kubenswrapper[4832]: E0125 08:12:51.706085 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs podName:1529f819-52bd-428f-970f-5f67f071e729 nodeName:}" failed. No retries permitted until 2026-01-25 08:13:07.706065149 +0000 UTC m=+970.379888682 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs") pod "openstack-operator-controller-manager-745947945d-jwhxb" (UID: "1529f819-52bd-428f-970f-5f67f071e729") : secret "metrics-server-cert" not found Jan 25 08:12:53 crc kubenswrapper[4832]: E0125 08:12:53.555463 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:12:53 crc kubenswrapper[4832]: E0125 08:12:53.556290 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:12:53 crc kubenswrapper[4832]: E0125 08:12:53.556704 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:12:53 crc kubenswrapper[4832]: E0125 08:12:53.556733 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-qrg9b" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="registry-server" Jan 25 08:12:55 crc kubenswrapper[4832]: E0125 08:12:55.054412 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 25 08:12:55 crc kubenswrapper[4832]: E0125 08:12:55.054586 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbrbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-lrsxz_openstack-operators(1e30c775-7a32-478e-8c3c-7312757f846b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:12:55 crc kubenswrapper[4832]: E0125 08:12:55.055867 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" podUID="1e30c775-7a32-478e-8c3c-7312757f846b" Jan 25 08:12:55 crc kubenswrapper[4832]: E0125 08:12:55.456266 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" podUID="1e30c775-7a32-478e-8c3c-7312757f846b" Jan 25 08:12:55 crc kubenswrapper[4832]: E0125 08:12:55.789661 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 25 08:12:55 crc kubenswrapper[4832]: E0125 08:12:55.789860 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jq2jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-75hsw_openstack-operators(0cac9e7d-b342-4b55-a667-76fa1c144080): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:12:55 crc kubenswrapper[4832]: E0125 08:12:55.790986 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" podUID="0cac9e7d-b342-4b55-a667-76fa1c144080" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.291051 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v7dkf"] Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.292666 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.298967 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7dkf"] Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.470973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-catalog-content\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.471072 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqdh4\" (UniqueName: \"kubernetes.io/projected/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-kube-api-access-fqdh4\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.471272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-utilities\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: E0125 08:12:56.485917 4832 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" podUID="0cac9e7d-b342-4b55-a667-76fa1c144080" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.572155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-utilities\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.572490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-catalog-content\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.572522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqdh4\" (UniqueName: \"kubernetes.io/projected/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-kube-api-access-fqdh4\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.572906 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-catalog-content\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 
08:12:56.572901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-utilities\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.610496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqdh4\" (UniqueName: \"kubernetes.io/projected/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-kube-api-access-fqdh4\") pod \"redhat-marketplace-v7dkf\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: I0125 08:12:56.624345 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:12:56 crc kubenswrapper[4832]: E0125 08:12:56.822464 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 25 08:12:56 crc kubenswrapper[4832]: E0125 08:12:56.822646 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sfqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-hpqjz_openstack-operators(0c897c34-1c91-416c-91e2-65ae83958e10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:12:56 crc kubenswrapper[4832]: E0125 08:12:56.823878 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" podUID="0c897c34-1c91-416c-91e2-65ae83958e10" Jan 25 08:12:57 crc kubenswrapper[4832]: E0125 08:12:57.477726 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" podUID="0c897c34-1c91-416c-91e2-65ae83958e10" Jan 25 08:12:57 crc kubenswrapper[4832]: E0125 08:12:57.817624 4832 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 25 08:12:57 crc kubenswrapper[4832]: E0125 08:12:57.817855 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kct5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-hr9t5_openstack-operators(8251d5ba-3a9a-429c-ba20-1af897640ad3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:12:57 crc kubenswrapper[4832]: E0125 08:12:57.819047 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" podUID="8251d5ba-3a9a-429c-ba20-1af897640ad3" Jan 25 08:12:58 crc kubenswrapper[4832]: E0125 08:12:58.482661 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 25 08:12:58 crc kubenswrapper[4832]: E0125 08:12:58.482825 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96d8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-qnxqc_openstack-operators(c3356b9d-3a3c-4583-9803-d08fcb621401): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:12:58 crc kubenswrapper[4832]: E0125 08:12:58.484717 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" podUID="8251d5ba-3a9a-429c-ba20-1af897640ad3" Jan 25 08:12:58 crc kubenswrapper[4832]: E0125 08:12:58.484845 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" podUID="c3356b9d-3a3c-4583-9803-d08fcb621401" Jan 25 08:12:59 crc kubenswrapper[4832]: E0125 08:12:59.490157 4832 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" podUID="c3356b9d-3a3c-4583-9803-d08fcb621401" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.305684 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.306080 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl5tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-t8jng_openstack-operators(44be34d2-851c-4bf5-a3fb-87607d045d1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.307301 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" podUID="44be34d2-851c-4bf5-a3fb-87607d045d1f" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.495328 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" podUID="44be34d2-851c-4bf5-a3fb-87607d045d1f" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.774032 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.774868 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.775185 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.775229 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-7hnz5" 
podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="registry-server" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.920945 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.921134 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5krl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-mstsp_openstack-operators(d75c853c-428e-4f6a-8a82-a050b71af662): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:13:00 crc kubenswrapper[4832]: E0125 08:13:00.922492 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" podUID="d75c853c-428e-4f6a-8a82-a050b71af662" Jan 25 08:13:00 crc kubenswrapper[4832]: I0125 08:13:00.968148 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.141740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-catalog-content\") pod \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.141859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-utilities\") pod \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.141902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2k9s\" (UniqueName: \"kubernetes.io/projected/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-kube-api-access-j2k9s\") pod \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\" (UID: \"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781\") " Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.143743 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-utilities" (OuterVolumeSpecName: "utilities") pod "464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" (UID: "464e0a0d-87e3-44d8-aa9d-2b95b2aa2781"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.159949 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-kube-api-access-j2k9s" (OuterVolumeSpecName: "kube-api-access-j2k9s") pod "464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" (UID: "464e0a0d-87e3-44d8-aa9d-2b95b2aa2781"). InnerVolumeSpecName "kube-api-access-j2k9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.185849 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" (UID: "464e0a0d-87e3-44d8-aa9d-2b95b2aa2781"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.243222 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.243260 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2k9s\" (UniqueName: \"kubernetes.io/projected/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-kube-api-access-j2k9s\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.243272 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.501541 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hnz5" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.501728 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hnz5" event={"ID":"464e0a0d-87e3-44d8-aa9d-2b95b2aa2781","Type":"ContainerDied","Data":"76c94a4ada191fab81c74a8135e8103d72ac6e7ba3a3431370fab69e42a13715"} Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.501843 4832 scope.go:117] "RemoveContainer" containerID="5b0d07b034dc06627e4569cf551c91ce8308dd9d33993e65dadb8af7f28bbd1a" Jan 25 08:13:01 crc kubenswrapper[4832]: E0125 08:13:01.502843 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" podUID="d75c853c-428e-4f6a-8a82-a050b71af662" Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.538858 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7hnz5"] Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.544418 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7hnz5"] Jan 25 08:13:01 crc kubenswrapper[4832]: I0125 08:13:01.679780 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" path="/var/lib/kubelet/pods/464e0a0d-87e3-44d8-aa9d-2b95b2aa2781/volumes" Jan 25 08:13:03 crc kubenswrapper[4832]: E0125 08:13:03.554857 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" 
containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:13:03 crc kubenswrapper[4832]: E0125 08:13:03.555338 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:13:03 crc kubenswrapper[4832]: E0125 08:13:03.555808 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:13:03 crc kubenswrapper[4832]: E0125 08:13:03.555961 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-qrg9b" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="registry-server" Jan 25 08:13:04 crc kubenswrapper[4832]: E0125 08:13:04.117981 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 25 08:13:04 crc kubenswrapper[4832]: E0125 08:13:04.119080 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qcrt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-vvwcx_openstack-operators(50da9b0d-da00-4211-95cd-0218828341e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:13:04 crc kubenswrapper[4832]: E0125 08:13:04.120765 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" podUID="50da9b0d-da00-4211-95cd-0218828341e5" Jan 25 08:13:04 crc kubenswrapper[4832]: E0125 08:13:04.522619 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" podUID="50da9b0d-da00-4211-95cd-0218828341e5" Jan 25 08:13:04 crc kubenswrapper[4832]: E0125 08:13:04.668869 4832 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 25 08:13:04 crc kubenswrapper[4832]: E0125 08:13:04.669079 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcv5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-q67lr_openstack-operators(d221c44f-6fb5-4b96-b84e-f1d55253ed08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:13:04 crc kubenswrapper[4832]: E0125 08:13:04.670278 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" podUID="d221c44f-6fb5-4b96-b84e-f1d55253ed08" Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.694513 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.895265 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-catalog-content\") pod \"09f1c770-b9b1-40cf-9805-b88a1445218a\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.895338 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrcwn\" (UniqueName: \"kubernetes.io/projected/09f1c770-b9b1-40cf-9805-b88a1445218a-kube-api-access-zrcwn\") pod \"09f1c770-b9b1-40cf-9805-b88a1445218a\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.895514 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-utilities\") pod \"09f1c770-b9b1-40cf-9805-b88a1445218a\" (UID: \"09f1c770-b9b1-40cf-9805-b88a1445218a\") " Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.896777 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-utilities" (OuterVolumeSpecName: "utilities") pod "09f1c770-b9b1-40cf-9805-b88a1445218a" (UID: "09f1c770-b9b1-40cf-9805-b88a1445218a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.901055 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f1c770-b9b1-40cf-9805-b88a1445218a-kube-api-access-zrcwn" (OuterVolumeSpecName: "kube-api-access-zrcwn") pod "09f1c770-b9b1-40cf-9805-b88a1445218a" (UID: "09f1c770-b9b1-40cf-9805-b88a1445218a"). InnerVolumeSpecName "kube-api-access-zrcwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.949134 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09f1c770-b9b1-40cf-9805-b88a1445218a" (UID: "09f1c770-b9b1-40cf-9805-b88a1445218a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.997016 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrcwn\" (UniqueName: \"kubernetes.io/projected/09f1c770-b9b1-40cf-9805-b88a1445218a-kube-api-access-zrcwn\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.997046 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:04 crc kubenswrapper[4832]: I0125 08:13:04.997056 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f1c770-b9b1-40cf-9805-b88a1445218a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.109138 4832 scope.go:117] "RemoveContainer" containerID="64279ddfe0fd6c4111fa0a57d49500f98d2c05e2c63437a405d802ef9cb276f3" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.194072 4832 scope.go:117] "RemoveContainer" containerID="484291b5b6ffa715120bf1be4f1dc156505e4b81f1b8b5b9bc44cd8664377e72" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.562730 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrg9b" event={"ID":"09f1c770-b9b1-40cf-9805-b88a1445218a","Type":"ContainerDied","Data":"b02f06b863a28d731d0354cd161b29f46c3652314add722082a9acd658808e5f"} Jan 25 
08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.562791 4832 scope.go:117] "RemoveContainer" containerID="7679a2b66424f2d90e13f781bede969d9e56a2601b0b7f50d985683f3759f239" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.562925 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrg9b" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.567254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" event={"ID":"efdb6007-fdd7-4a18-9dba-4f1571f6f822","Type":"ContainerStarted","Data":"a26295ed7db006cb31b0efbdbcc6e05b27c36a574264afd782e0087aec214df5"} Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.567928 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.583816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" event={"ID":"3f993c1e-81ae-4e86-9b28-eccb1db48f2b","Type":"ContainerStarted","Data":"266de99137c3e56f710f327aa3cffdece96d59aca7df29fd3ccd356eb9ae777e"} Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.584522 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.600355 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" podStartSLOduration=4.391937371 podStartE2EDuration="31.600318071s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:36.389601489 +0000 UTC m=+939.063425022" lastFinishedPulling="2026-01-25 08:13:03.597982179 +0000 UTC m=+966.271805722" 
observedRunningTime="2026-01-25 08:13:05.59106096 +0000 UTC m=+968.264884513" watchObservedRunningTime="2026-01-25 08:13:05.600318071 +0000 UTC m=+968.274141604" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.601464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" event={"ID":"b3a8f752-cc73-4933-88d1-3b661a42ead2","Type":"ContainerStarted","Data":"3435ce1b47adb5c92443237bb39429390ddf033be91a01c1358ddd343f5fe22b"} Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.602551 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.614576 4832 scope.go:117] "RemoveContainer" containerID="8c7d2065755c03c6d86cba8bd7e425579eebda18aacdca0c04ae93121b369e38" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.616856 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7dkf"] Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.622432 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" event={"ID":"b1702aab-2dd8-488f-8a7f-93f43df4b0ab","Type":"ContainerStarted","Data":"b909082c67bd694a4b23b6a339ed8683d39087a688e05c66936a23c7d62abc1a"} Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.623172 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.626940 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" podStartSLOduration=5.387205651 podStartE2EDuration="31.626907887s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 
08:12:37.358278703 +0000 UTC m=+940.032102236" lastFinishedPulling="2026-01-25 08:13:03.597980939 +0000 UTC m=+966.271804472" observedRunningTime="2026-01-25 08:13:05.613861077 +0000 UTC m=+968.287684630" watchObservedRunningTime="2026-01-25 08:13:05.626907887 +0000 UTC m=+968.300731420" Jan 25 08:13:05 crc kubenswrapper[4832]: E0125 08:13:05.632175 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" podUID="d221c44f-6fb5-4b96-b84e-f1d55253ed08" Jan 25 08:13:05 crc kubenswrapper[4832]: W0125 08:13:05.642397 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c74f9f_348d_4f8d_ab88_8bfd200a3f20.slice/crio-86050764efb19f0a2faca9c8391593efa294aebe7d15f368d5321e0627c51af1 WatchSource:0}: Error finding container 86050764efb19f0a2faca9c8391593efa294aebe7d15f368d5321e0627c51af1: Status 404 returned error can't find the container with id 86050764efb19f0a2faca9c8391593efa294aebe7d15f368d5321e0627c51af1 Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.657843 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrg9b"] Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.662811 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qrg9b"] Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.666487 4832 scope.go:117] "RemoveContainer" containerID="3c0da3ec0e400b7084c9b356e526fcbdb60ae830140eadc704c95246af074504" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.688018 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" 
path="/var/lib/kubelet/pods/09f1c770-b9b1-40cf-9805-b88a1445218a/volumes" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.738423 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" podStartSLOduration=4.61292871 podStartE2EDuration="31.738399853s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:36.472607749 +0000 UTC m=+939.146431282" lastFinishedPulling="2026-01-25 08:13:03.598078892 +0000 UTC m=+966.271902425" observedRunningTime="2026-01-25 08:13:05.735971527 +0000 UTC m=+968.409795070" watchObservedRunningTime="2026-01-25 08:13:05.738399853 +0000 UTC m=+968.412223396" Jan 25 08:13:05 crc kubenswrapper[4832]: I0125 08:13:05.780685 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" podStartSLOduration=4.562532385 podStartE2EDuration="31.780658883s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:36.379896663 +0000 UTC m=+939.053720196" lastFinishedPulling="2026-01-25 08:13:03.598023161 +0000 UTC m=+966.271846694" observedRunningTime="2026-01-25 08:13:05.765126794 +0000 UTC m=+968.438950327" watchObservedRunningTime="2026-01-25 08:13:05.780658883 +0000 UTC m=+968.454482416" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.646131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" event={"ID":"47605944-bcb8-4196-9eb3-b26c2e923e70","Type":"ContainerStarted","Data":"20f68b6eaf3af9bbe79030328e65dfde89416c22f6ce8a981c66eda4772cc47d"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.646345 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 
08:13:06.647610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" event={"ID":"31cef49b-390b-4029-bdc4-64893be3d183","Type":"ContainerStarted","Data":"b59281a1e332c941df498b9810e0dba3903812ae147e8a9940aeb47859b3538c"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.647740 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.649265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" event={"ID":"eb801494-724f-482a-a359-896e5b735b62","Type":"ContainerStarted","Data":"7819e12f959013be65508b53c0a0270f65178c4f95390b3574083e53af6966e4"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.649403 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.650649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" event={"ID":"8d21c83b-b981-4466-b81a-ed7954d1f3cb","Type":"ContainerStarted","Data":"e214ada3a8eff71d16502c82cf54b0536969283ddbd28e98309929cce1322f9c"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.650691 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.652063 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" event={"ID":"b618d12e-02c2-4ae7-872a-15bd233259b5","Type":"ContainerStarted","Data":"6babc7c82e11a302c20b7bdf92b558d1f35ee9fd94b9bf802d2f7ca4ec0041cc"} Jan 25 08:13:06 crc 
kubenswrapper[4832]: I0125 08:13:06.652180 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.653616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" event={"ID":"cdb822ca-2a1d-4b10-8d44-f2cb33173358","Type":"ContainerStarted","Data":"afc6989a494b3d6aee6c83ca27e70385f2ab0e2fe910555e3d9733e698d8217a"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.655059 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerID="c95cd03a27adfaa6d2eea4ce6fc11fa61f23c0602da6039b8456362066cbc31f" exitCode=0 Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.655100 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7dkf" event={"ID":"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20","Type":"ContainerDied","Data":"c95cd03a27adfaa6d2eea4ce6fc11fa61f23c0602da6039b8456362066cbc31f"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.655128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7dkf" event={"ID":"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20","Type":"ContainerStarted","Data":"86050764efb19f0a2faca9c8391593efa294aebe7d15f368d5321e0627c51af1"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.658828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" event={"ID":"1f038807-2bed-41a2-aecd-35d29e529eb8","Type":"ContainerStarted","Data":"6a258cc99ed8e4c6e78ec2a175394cbc2da5bef8089d8377fd4653c87ef171ba"} Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.665650 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" podStartSLOduration=4.094930839 podStartE2EDuration="31.665625324s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.544796529 +0000 UTC m=+940.218620062" lastFinishedPulling="2026-01-25 08:13:05.115491014 +0000 UTC m=+967.789314547" observedRunningTime="2026-01-25 08:13:06.663997713 +0000 UTC m=+969.337821246" watchObservedRunningTime="2026-01-25 08:13:06.665625324 +0000 UTC m=+969.339448857" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.726118 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" podStartSLOduration=6.607269541 podStartE2EDuration="32.726097737s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.479231176 +0000 UTC m=+940.153054709" lastFinishedPulling="2026-01-25 08:13:03.598059382 +0000 UTC m=+966.271882905" observedRunningTime="2026-01-25 08:13:06.694871614 +0000 UTC m=+969.368695147" watchObservedRunningTime="2026-01-25 08:13:06.726097737 +0000 UTC m=+969.399921270" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.727878 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" podStartSLOduration=4.211133034 podStartE2EDuration="31.727868862s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.59856438 +0000 UTC m=+940.272387923" lastFinishedPulling="2026-01-25 08:13:05.115300218 +0000 UTC m=+967.789123751" observedRunningTime="2026-01-25 08:13:06.720644804 +0000 UTC m=+969.394468347" watchObservedRunningTime="2026-01-25 08:13:06.727868862 +0000 UTC m=+969.401692395" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.738425 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-f87nw" podStartSLOduration=4.150771985 podStartE2EDuration="31.738406303s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.592610922 +0000 UTC m=+940.266434455" lastFinishedPulling="2026-01-25 08:13:05.18024523 +0000 UTC m=+967.854068773" observedRunningTime="2026-01-25 08:13:06.736000868 +0000 UTC m=+969.409824401" watchObservedRunningTime="2026-01-25 08:13:06.738406303 +0000 UTC m=+969.412229836" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.771220 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" podStartSLOduration=4.227923851 podStartE2EDuration="31.771195884s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.572248952 +0000 UTC m=+940.246072485" lastFinishedPulling="2026-01-25 08:13:05.115520985 +0000 UTC m=+967.789344518" observedRunningTime="2026-01-25 08:13:06.767341573 +0000 UTC m=+969.441165126" watchObservedRunningTime="2026-01-25 08:13:06.771195884 +0000 UTC m=+969.445019407" Jan 25 08:13:06 crc kubenswrapper[4832]: I0125 08:13:06.800322 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" podStartSLOduration=6.543199116 podStartE2EDuration="32.80029726s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.340922157 +0000 UTC m=+940.014745690" lastFinishedPulling="2026-01-25 08:13:03.598020291 +0000 UTC m=+966.271843834" observedRunningTime="2026-01-25 08:13:06.799549276 +0000 UTC m=+969.473372809" watchObservedRunningTime="2026-01-25 08:13:06.80029726 +0000 UTC m=+969.474120793" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.030627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.039201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b29aa4-b326-4515-9842-6d848c208096-cert\") pod \"infra-operator-controller-manager-694cf4f878-vt5m9\" (UID: \"29b29aa4-b326-4515-9842-6d848c208096\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.105876 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zzlmb" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.114305 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.233474 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.237574 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b784c4a-e1cf-42fb-ad96-dca059f63e79-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw\" (UID: \"3b784c4a-e1cf-42fb-ad96-dca059f63e79\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:13:07 crc 
kubenswrapper[4832]: I0125 08:13:07.455731 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-8r76f" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.464071 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.615217 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" podStartSLOduration=6.376521274 podStartE2EDuration="32.615195328s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.359240553 +0000 UTC m=+940.033064086" lastFinishedPulling="2026-01-25 08:13:03.597914607 +0000 UTC m=+966.271738140" observedRunningTime="2026-01-25 08:13:06.825083559 +0000 UTC m=+969.498907092" watchObservedRunningTime="2026-01-25 08:13:07.615195328 +0000 UTC m=+970.289018861" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.617365 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9"] Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.685423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" event={"ID":"29b29aa4-b326-4515-9842-6d848c208096","Type":"ContainerStarted","Data":"05d06184550e610c8f1d46803a8e490d2a803bc4c09675e70de059830e88348c"} Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.689179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7dkf" event={"ID":"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20","Type":"ContainerStarted","Data":"25f944c24e831edf765fcbaa71a2ac3894bf02c29f60e3fca789c4ce3eb083eb"} Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 
08:13:07.745211 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.745255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.752475 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-webhook-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.753199 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1529f819-52bd-428f-970f-5f67f071e729-metrics-certs\") pod \"openstack-operator-controller-manager-745947945d-jwhxb\" (UID: \"1529f819-52bd-428f-970f-5f67f071e729\") " pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.794177 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw"] Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.981427 4832 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-46tnh" Jan 25 08:13:07 crc kubenswrapper[4832]: I0125 08:13:07.990448 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.260909 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb"] Jan 25 08:13:08 crc kubenswrapper[4832]: W0125 08:13:08.270919 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1529f819_52bd_428f_970f_5f67f071e729.slice/crio-eae667a599c2aadbaf4d931d1b866a8cc75bd7aacba0b7a2a2e97c7193146706 WatchSource:0}: Error finding container eae667a599c2aadbaf4d931d1b866a8cc75bd7aacba0b7a2a2e97c7193146706: Status 404 returned error can't find the container with id eae667a599c2aadbaf4d931d1b866a8cc75bd7aacba0b7a2a2e97c7193146706 Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.701060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" event={"ID":"1529f819-52bd-428f-970f-5f67f071e729","Type":"ContainerStarted","Data":"4e55a3f7c9d529236e7b365eef98b34bdeae70fd4958d0f75d786415f2ef7658"} Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.702524 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.702568 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" 
event={"ID":"1529f819-52bd-428f-970f-5f67f071e729","Type":"ContainerStarted","Data":"eae667a599c2aadbaf4d931d1b866a8cc75bd7aacba0b7a2a2e97c7193146706"} Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.721237 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" event={"ID":"3b784c4a-e1cf-42fb-ad96-dca059f63e79","Type":"ContainerStarted","Data":"3d46154c3142ed8481ec9f4717e9d0cedad4bf276c9d31c683cae05b85bba738"} Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.723997 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerID="25f944c24e831edf765fcbaa71a2ac3894bf02c29f60e3fca789c4ce3eb083eb" exitCode=0 Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.724039 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7dkf" event={"ID":"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20","Type":"ContainerDied","Data":"25f944c24e831edf765fcbaa71a2ac3894bf02c29f60e3fca789c4ce3eb083eb"} Jan 25 08:13:08 crc kubenswrapper[4832]: I0125 08:13:08.746175 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" podStartSLOduration=33.746145755 podStartE2EDuration="33.746145755s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:13:08.738444013 +0000 UTC m=+971.412267566" watchObservedRunningTime="2026-01-25 08:13:08.746145755 +0000 UTC m=+971.419969288" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.788740 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" 
event={"ID":"29b29aa4-b326-4515-9842-6d848c208096","Type":"ContainerStarted","Data":"8036114003fc4510099eddfd52eb96ca84c2ffd7b300a5861e759399818c0b5f"} Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.789926 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.792445 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" event={"ID":"1e30c775-7a32-478e-8c3c-7312757f846b","Type":"ContainerStarted","Data":"5e452f0f4782c86d751adc7f41ccb13b3047b87f08e34ad5044331b82a8515ea"} Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.804109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" event={"ID":"0cac9e7d-b342-4b55-a667-76fa1c144080","Type":"ContainerStarted","Data":"b40475c81045ab54bbf4757b2544309a33fd2e362b0ccd8a9dcb55214bcccfe5"} Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.804434 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.809883 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.814896 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" podStartSLOduration=34.144029245 podStartE2EDuration="38.814871587s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:13:07.625923295 +0000 UTC m=+970.299746828" lastFinishedPulling="2026-01-25 08:13:12.296765637 +0000 UTC m=+974.970589170" 
observedRunningTime="2026-01-25 08:13:12.810168059 +0000 UTC m=+975.483991592" watchObservedRunningTime="2026-01-25 08:13:12.814871587 +0000 UTC m=+975.488695120" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.817310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" event={"ID":"8251d5ba-3a9a-429c-ba20-1af897640ad3","Type":"ContainerStarted","Data":"05bee986bbb0e8e38b96b64f811a76881d73dcc528c37662bca52776c48daffa"} Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.817888 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.833569 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" podStartSLOduration=2.873810862 podStartE2EDuration="38.833547543s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:36.348160825 +0000 UTC m=+939.021984348" lastFinishedPulling="2026-01-25 08:13:12.307897496 +0000 UTC m=+974.981721029" observedRunningTime="2026-01-25 08:13:12.832071716 +0000 UTC m=+975.505895249" watchObservedRunningTime="2026-01-25 08:13:12.833547543 +0000 UTC m=+975.507371076" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.858312 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" podStartSLOduration=33.360708067 podStartE2EDuration="37.858295299s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:13:07.808687603 +0000 UTC m=+970.482511136" lastFinishedPulling="2026-01-25 08:13:12.306274815 +0000 UTC m=+974.980098368" observedRunningTime="2026-01-25 08:13:12.85196117 +0000 UTC m=+975.525784703" 
watchObservedRunningTime="2026-01-25 08:13:12.858295299 +0000 UTC m=+975.532118832" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.880325 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v7dkf" podStartSLOduration=11.229095717 podStartE2EDuration="16.8803089s" podCreationTimestamp="2026-01-25 08:12:56 +0000 UTC" firstStartedPulling="2026-01-25 08:13:06.656619031 +0000 UTC m=+969.330442564" lastFinishedPulling="2026-01-25 08:13:12.307832224 +0000 UTC m=+974.981655747" observedRunningTime="2026-01-25 08:13:12.877893664 +0000 UTC m=+975.551717197" watchObservedRunningTime="2026-01-25 08:13:12.8803089 +0000 UTC m=+975.554132433" Jan 25 08:13:12 crc kubenswrapper[4832]: I0125 08:13:12.901358 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" podStartSLOduration=3.961691242 podStartE2EDuration="38.90132932s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.366916055 +0000 UTC m=+940.040739588" lastFinishedPulling="2026-01-25 08:13:12.306554133 +0000 UTC m=+974.980377666" observedRunningTime="2026-01-25 08:13:12.89431981 +0000 UTC m=+975.568143343" watchObservedRunningTime="2026-01-25 08:13:12.90132932 +0000 UTC m=+975.575152853" Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.824395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7dkf" event={"ID":"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20","Type":"ContainerStarted","Data":"7b9ed1bbf6eb9e9871448c49a6e32b5ddbaa6f6397a92b3e4926fd025e9b2707"} Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.825450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" 
event={"ID":"44be34d2-851c-4bf5-a3fb-87607d045d1f","Type":"ContainerStarted","Data":"bc5d211544e9e0859736d130242a216e121ed8dedfba875e2708a8a622140077"} Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.825766 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.826573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" event={"ID":"0c897c34-1c91-416c-91e2-65ae83958e10","Type":"ContainerStarted","Data":"5d06d72f18fa56b2e730fccc9701639b6c783d5c837ced7e397ff2f142b3387d"} Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.826699 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.828061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" event={"ID":"3b784c4a-e1cf-42fb-ad96-dca059f63e79","Type":"ContainerStarted","Data":"d8dfa9d1b579ec4014b901d7edec4479a17c0e5951f7e4f5ddd626e97de81121"} Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.843085 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" podStartSLOduration=3.364637854 podStartE2EDuration="39.843070165s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.086124614 +0000 UTC m=+939.759948147" lastFinishedPulling="2026-01-25 08:13:13.564556925 +0000 UTC m=+976.238380458" observedRunningTime="2026-01-25 08:13:13.841844097 +0000 UTC m=+976.515667630" watchObservedRunningTime="2026-01-25 08:13:13.843070165 +0000 UTC m=+976.516893698" Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 
08:13:13.874118 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" podStartSLOduration=4.875902347 podStartE2EDuration="39.874103299s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.298618226 +0000 UTC m=+939.972441759" lastFinishedPulling="2026-01-25 08:13:12.296819168 +0000 UTC m=+974.970642711" observedRunningTime="2026-01-25 08:13:13.85789166 +0000 UTC m=+976.531715193" watchObservedRunningTime="2026-01-25 08:13:13.874103299 +0000 UTC m=+976.547926832" Jan 25 08:13:13 crc kubenswrapper[4832]: I0125 08:13:13.874358 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" podStartSLOduration=3.90428494 podStartE2EDuration="38.874353867s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.340834954 +0000 UTC m=+940.014658487" lastFinishedPulling="2026-01-25 08:13:12.310903881 +0000 UTC m=+974.984727414" observedRunningTime="2026-01-25 08:13:13.86933144 +0000 UTC m=+976.543154973" watchObservedRunningTime="2026-01-25 08:13:13.874353867 +0000 UTC m=+976.548177400" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.084466 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-qdwdw" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.147579 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-mgsq7" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.220676 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-h4c7b" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.299196 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nzjmz" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.520766 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.548434 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.630864 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-642xd" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.731777 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cf7rg" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.798184 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zwlrf" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.846776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" event={"ID":"c3356b9d-3a3c-4583-9803-d08fcb621401","Type":"ContainerStarted","Data":"c68aff5cf1c5059efca50d02af2b1ac228f9c63e840955dc9fcc2c0a95d43ba5"} Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.847197 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" Jan 25 08:13:15 crc kubenswrapper[4832]: I0125 08:13:15.861689 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-59gds" Jan 25 08:13:15 crc 
kubenswrapper[4832]: I0125 08:13:15.862958 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" podStartSLOduration=3.293809207 podStartE2EDuration="40.862940366s" podCreationTimestamp="2026-01-25 08:12:35 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.567546754 +0000 UTC m=+940.241370287" lastFinishedPulling="2026-01-25 08:13:15.136677913 +0000 UTC m=+977.810501446" observedRunningTime="2026-01-25 08:13:15.859017943 +0000 UTC m=+978.532841476" watchObservedRunningTime="2026-01-25 08:13:15.862940366 +0000 UTC m=+978.536763899" Jan 25 08:13:16 crc kubenswrapper[4832]: I0125 08:13:16.197733 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" Jan 25 08:13:16 crc kubenswrapper[4832]: I0125 08:13:16.199967 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-57npv" Jan 25 08:13:16 crc kubenswrapper[4832]: I0125 08:13:16.624575 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:13:16 crc kubenswrapper[4832]: I0125 08:13:16.624663 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:13:16 crc kubenswrapper[4832]: I0125 08:13:16.689688 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:13:17 crc kubenswrapper[4832]: I0125 08:13:17.119556 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vt5m9" Jan 25 08:13:17 crc kubenswrapper[4832]: I0125 08:13:17.472146 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw" Jan 25 08:13:17 crc kubenswrapper[4832]: I0125 08:13:17.998443 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-745947945d-jwhxb" Jan 25 08:13:24 crc kubenswrapper[4832]: I0125 08:13:24.920650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" event={"ID":"d75c853c-428e-4f6a-8a82-a050b71af662","Type":"ContainerStarted","Data":"568d47f05acf7c92515c3c06bcb456a8304605d2d4be1d92c2fd191f6a065265"} Jan 25 08:13:24 crc kubenswrapper[4832]: I0125 08:13:24.921770 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" Jan 25 08:13:24 crc kubenswrapper[4832]: I0125 08:13:24.922410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" event={"ID":"50da9b0d-da00-4211-95cd-0218828341e5","Type":"ContainerStarted","Data":"176c4dfddcc5094b1dffc1b8360ce3ff29b81e282d28917c63376e8b111cfe9d"} Jan 25 08:13:24 crc kubenswrapper[4832]: I0125 08:13:24.922648 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" Jan 25 08:13:24 crc kubenswrapper[4832]: I0125 08:13:24.943431 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" podStartSLOduration=12.084414621 podStartE2EDuration="50.943406629s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.292836335 +0000 UTC m=+939.966659868" lastFinishedPulling="2026-01-25 08:13:16.151828343 +0000 UTC m=+978.825651876" observedRunningTime="2026-01-25 08:13:24.936572415 +0000 UTC m=+987.610395958" 
watchObservedRunningTime="2026-01-25 08:13:24.943406629 +0000 UTC m=+987.617230162" Jan 25 08:13:25 crc kubenswrapper[4832]: I0125 08:13:25.107776 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75hsw" Jan 25 08:13:25 crc kubenswrapper[4832]: I0125 08:13:25.122231 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" podStartSLOduration=12.088716181 podStartE2EDuration="51.12221099s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.076279224 +0000 UTC m=+939.750102757" lastFinishedPulling="2026-01-25 08:13:16.109774033 +0000 UTC m=+978.783597566" observedRunningTime="2026-01-25 08:13:24.955014363 +0000 UTC m=+987.628837916" watchObservedRunningTime="2026-01-25 08:13:25.12221099 +0000 UTC m=+987.796034523" Jan 25 08:13:25 crc kubenswrapper[4832]: I0125 08:13:25.392688 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-t8jng" Jan 25 08:13:25 crc kubenswrapper[4832]: I0125 08:13:25.531992 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-hpqjz" Jan 25 08:13:25 crc kubenswrapper[4832]: I0125 08:13:25.551334 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lrsxz" Jan 25 08:13:26 crc kubenswrapper[4832]: I0125 08:13:26.074507 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-hr9t5" Jan 25 08:13:26 crc kubenswrapper[4832]: I0125 08:13:26.238250 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qnxqc" Jan 25 08:13:26 crc kubenswrapper[4832]: I0125 08:13:26.686078 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:13:26 crc kubenswrapper[4832]: I0125 08:13:26.732457 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7dkf"] Jan 25 08:13:26 crc kubenswrapper[4832]: I0125 08:13:26.934959 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v7dkf" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="registry-server" containerID="cri-o://7b9ed1bbf6eb9e9871448c49a6e32b5ddbaa6f6397a92b3e4926fd025e9b2707" gracePeriod=2 Jan 25 08:13:28 crc kubenswrapper[4832]: I0125 08:13:28.975058 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerID="7b9ed1bbf6eb9e9871448c49a6e32b5ddbaa6f6397a92b3e4926fd025e9b2707" exitCode=0 Jan 25 08:13:28 crc kubenswrapper[4832]: I0125 08:13:28.975123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7dkf" event={"ID":"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20","Type":"ContainerDied","Data":"7b9ed1bbf6eb9e9871448c49a6e32b5ddbaa6f6397a92b3e4926fd025e9b2707"} Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.669023 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.768998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-catalog-content\") pod \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.769160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqdh4\" (UniqueName: \"kubernetes.io/projected/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-kube-api-access-fqdh4\") pod \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.769238 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-utilities\") pod \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\" (UID: \"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20\") " Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.770600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-utilities" (OuterVolumeSpecName: "utilities") pod "f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" (UID: "f7c74f9f-348d-4f8d-ab88-8bfd200a3f20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.779686 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-kube-api-access-fqdh4" (OuterVolumeSpecName: "kube-api-access-fqdh4") pod "f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" (UID: "f7c74f9f-348d-4f8d-ab88-8bfd200a3f20"). InnerVolumeSpecName "kube-api-access-fqdh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.789661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" (UID: "f7c74f9f-348d-4f8d-ab88-8bfd200a3f20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.871635 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.871666 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.871677 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqdh4\" (UniqueName: \"kubernetes.io/projected/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20-kube-api-access-fqdh4\") on node \"crc\" DevicePath \"\"" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.987628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7dkf" event={"ID":"f7c74f9f-348d-4f8d-ab88-8bfd200a3f20","Type":"ContainerDied","Data":"86050764efb19f0a2faca9c8391593efa294aebe7d15f368d5321e0627c51af1"} Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.987716 4832 scope.go:117] "RemoveContainer" containerID="7b9ed1bbf6eb9e9871448c49a6e32b5ddbaa6f6397a92b3e4926fd025e9b2707" Jan 25 08:13:29 crc kubenswrapper[4832]: I0125 08:13:29.987827 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7dkf" Jan 25 08:13:30 crc kubenswrapper[4832]: I0125 08:13:30.008823 4832 scope.go:117] "RemoveContainer" containerID="25f944c24e831edf765fcbaa71a2ac3894bf02c29f60e3fca789c4ce3eb083eb" Jan 25 08:13:30 crc kubenswrapper[4832]: I0125 08:13:30.038126 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7dkf"] Jan 25 08:13:30 crc kubenswrapper[4832]: I0125 08:13:30.041023 4832 scope.go:117] "RemoveContainer" containerID="c95cd03a27adfaa6d2eea4ce6fc11fa61f23c0602da6039b8456362066cbc31f" Jan 25 08:13:30 crc kubenswrapper[4832]: I0125 08:13:30.045591 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7dkf"] Jan 25 08:13:30 crc kubenswrapper[4832]: I0125 08:13:30.995805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" event={"ID":"d221c44f-6fb5-4b96-b84e-f1d55253ed08","Type":"ContainerStarted","Data":"73c145169afd1bcdacd919bdfec0a320b900910d431b65557b4dff6603360fdd"} Jan 25 08:13:30 crc kubenswrapper[4832]: I0125 08:13:30.996459 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" Jan 25 08:13:31 crc kubenswrapper[4832]: I0125 08:13:31.015932 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" podStartSLOduration=4.239741415 podStartE2EDuration="57.015904697s" podCreationTimestamp="2026-01-25 08:12:34 +0000 UTC" firstStartedPulling="2026-01-25 08:12:37.34292624 +0000 UTC m=+940.016749763" lastFinishedPulling="2026-01-25 08:13:30.119089512 +0000 UTC m=+992.792913045" observedRunningTime="2026-01-25 08:13:31.010628202 +0000 UTC m=+993.684451785" watchObservedRunningTime="2026-01-25 08:13:31.015904697 +0000 UTC m=+993.689728270" Jan 25 
08:13:31 crc kubenswrapper[4832]: I0125 08:13:31.677842 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" path="/var/lib/kubelet/pods/f7c74f9f-348d-4f8d-ab88-8bfd200a3f20/volumes" Jan 25 08:13:35 crc kubenswrapper[4832]: I0125 08:13:35.426764 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-mstsp" Jan 25 08:13:35 crc kubenswrapper[4832]: I0125 08:13:35.430676 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-vvwcx" Jan 25 08:13:35 crc kubenswrapper[4832]: I0125 08:13:35.562995 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-q67lr" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158088 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qrk8t"] Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158852 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="extract-utilities" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158864 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="extract-utilities" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158874 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="extract-content" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158880 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="extract-content" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158892 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158898 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158907 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="extract-content" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158913 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="extract-content" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158922 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158930 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158941 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158946 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158958 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="extract-utilities" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158964 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="extract-utilities" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158973 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="extract-utilities" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158979 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="extract-utilities" Jan 25 08:13:52 crc kubenswrapper[4832]: E0125 08:13:52.158993 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="extract-content" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.158998 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="extract-content" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.159130 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f1c770-b9b1-40cf-9805-b88a1445218a" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.159141 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="464e0a0d-87e3-44d8-aa9d-2b95b2aa2781" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.159153 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c74f9f-348d-4f8d-ab88-8bfd200a3f20" containerName="registry-server" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.159814 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.166240 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.166479 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mq2v8" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.166605 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.170666 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.174454 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qrk8t"] Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.263154 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hl42z"] Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.264410 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.272208 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.279561 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hl42z"] Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.301208 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2fcg\" (UniqueName: \"kubernetes.io/projected/5e04d739-fa58-4eeb-aa09-415c9472a144-kube-api-access-c2fcg\") pod \"dnsmasq-dns-675f4bcbfc-qrk8t\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.301297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e04d739-fa58-4eeb-aa09-415c9472a144-config\") pod \"dnsmasq-dns-675f4bcbfc-qrk8t\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.402412 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2fcg\" (UniqueName: \"kubernetes.io/projected/5e04d739-fa58-4eeb-aa09-415c9472a144-kube-api-access-c2fcg\") pod \"dnsmasq-dns-675f4bcbfc-qrk8t\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.402467 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e04d739-fa58-4eeb-aa09-415c9472a144-config\") pod \"dnsmasq-dns-675f4bcbfc-qrk8t\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc 
kubenswrapper[4832]: I0125 08:13:52.402504 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-config\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.402541 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzppj\" (UniqueName: \"kubernetes.io/projected/b53b2f44-1755-45cd-b63e-32e5109e10c1-kube-api-access-vzppj\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.402577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.403437 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e04d739-fa58-4eeb-aa09-415c9472a144-config\") pod \"dnsmasq-dns-675f4bcbfc-qrk8t\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.421789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2fcg\" (UniqueName: \"kubernetes.io/projected/5e04d739-fa58-4eeb-aa09-415c9472a144-kube-api-access-c2fcg\") pod \"dnsmasq-dns-675f4bcbfc-qrk8t\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 
08:13:52.487248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.504155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-config\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.504262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzppj\" (UniqueName: \"kubernetes.io/projected/b53b2f44-1755-45cd-b63e-32e5109e10c1-kube-api-access-vzppj\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.504304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.505232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-config\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.505317 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.525547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzppj\" (UniqueName: \"kubernetes.io/projected/b53b2f44-1755-45cd-b63e-32e5109e10c1-kube-api-access-vzppj\") pod \"dnsmasq-dns-78dd6ddcc-hl42z\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.582359 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.830836 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hl42z"] Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.837739 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:13:52 crc kubenswrapper[4832]: I0125 08:13:52.945284 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qrk8t"] Jan 25 08:13:52 crc kubenswrapper[4832]: W0125 08:13:52.946566 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e04d739_fa58_4eeb_aa09_415c9472a144.slice/crio-78b4b5bf1115971f71d6430b0f4ffe7b99851af8e22edb776ece15ae7484b005 WatchSource:0}: Error finding container 78b4b5bf1115971f71d6430b0f4ffe7b99851af8e22edb776ece15ae7484b005: Status 404 returned error can't find the container with id 78b4b5bf1115971f71d6430b0f4ffe7b99851af8e22edb776ece15ae7484b005 Jan 25 08:13:53 crc kubenswrapper[4832]: I0125 08:13:53.182746 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" event={"ID":"b53b2f44-1755-45cd-b63e-32e5109e10c1","Type":"ContainerStarted","Data":"c0d453652a34580cbe6669f2ed97f5fee359f954bb39e438527b7a52cf1d47ba"} Jan 25 08:13:53 crc 
kubenswrapper[4832]: I0125 08:13:53.183748 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" event={"ID":"5e04d739-fa58-4eeb-aa09-415c9472a144","Type":"ContainerStarted","Data":"78b4b5bf1115971f71d6430b0f4ffe7b99851af8e22edb776ece15ae7484b005"} Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.024141 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qrk8t"] Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.050103 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gfs8w"] Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.051280 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.065244 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gfs8w"] Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.165369 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.165436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-config\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.165473 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5w8s\" (UniqueName: 
\"kubernetes.io/projected/daa59b36-5024-41ae-88f1-49703006f341-kube-api-access-s5w8s\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.266469 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.266535 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-config\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.266591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5w8s\" (UniqueName: \"kubernetes.io/projected/daa59b36-5024-41ae-88f1-49703006f341-kube-api-access-s5w8s\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.268330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-config\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.269140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-dns-svc\") pod 
\"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.304561 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5w8s\" (UniqueName: \"kubernetes.io/projected/daa59b36-5024-41ae-88f1-49703006f341-kube-api-access-s5w8s\") pod \"dnsmasq-dns-666b6646f7-gfs8w\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.368836 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hl42z"] Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.379914 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.399230 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jwr5g"] Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.400435 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.418313 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jwr5g"] Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.470793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.470840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plwkd\" (UniqueName: \"kubernetes.io/projected/01866d50-e28c-44e2-a57d-5d5a7ea04626-kube-api-access-plwkd\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.470872 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-config\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.572648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.573936 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plwkd\" (UniqueName: 
\"kubernetes.io/projected/01866d50-e28c-44e2-a57d-5d5a7ea04626-kube-api-access-plwkd\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.573973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-config\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.575029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.575323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-config\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.630028 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plwkd\" (UniqueName: \"kubernetes.io/projected/01866d50-e28c-44e2-a57d-5d5a7ea04626-kube-api-access-plwkd\") pod \"dnsmasq-dns-57d769cc4f-jwr5g\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.726506 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:13:55 crc kubenswrapper[4832]: I0125 08:13:55.849162 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gfs8w"] Jan 25 08:13:55 crc kubenswrapper[4832]: W0125 08:13:55.856279 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaa59b36_5024_41ae_88f1_49703006f341.slice/crio-29df7776dfcb0b19da7dda07e5b50df41cc8f74c6a56241899a92f768bc74ef2 WatchSource:0}: Error finding container 29df7776dfcb0b19da7dda07e5b50df41cc8f74c6a56241899a92f768bc74ef2: Status 404 returned error can't find the container with id 29df7776dfcb0b19da7dda07e5b50df41cc8f74c6a56241899a92f768bc74ef2 Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.220629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" event={"ID":"daa59b36-5024-41ae-88f1-49703006f341","Type":"ContainerStarted","Data":"29df7776dfcb0b19da7dda07e5b50df41cc8f74c6a56241899a92f768bc74ef2"} Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.236658 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.239835 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.243989 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.244146 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.245719 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.245807 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.245942 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.246169 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.246453 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ktmhd" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.254889 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/2f80d9a5-5d45-4053-875c-908242efc5e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287896 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287920 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwf4\" (UniqueName: 
\"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-kube-api-access-hvwf4\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.287960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.288148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.288194 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f80d9a5-5d45-4053-875c-908242efc5e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389645 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389743 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389810 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwf4\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-kube-api-access-hvwf4\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389835 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389853 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.389952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f80d9a5-5d45-4053-875c-908242efc5e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.390055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.390079 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f80d9a5-5d45-4053-875c-908242efc5e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.390642 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.390761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.391072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.392587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.393640 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.397632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " 
pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.404665 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.404811 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f80d9a5-5d45-4053-875c-908242efc5e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.405513 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jwr5g"] Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.409654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f80d9a5-5d45-4053-875c-908242efc5e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.410572 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.412179 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwf4\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-kube-api-access-hvwf4\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " 
pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.414272 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.558428 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.560805 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.566470 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.566665 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.566686 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.566804 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.566859 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.566806 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.567081 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2tqqh" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.567751 4832 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.593868 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597221 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597249 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597274 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b86227f-350e-4e03-aefd-00f308ccb238-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597373 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gptfm\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-kube-api-access-gptfm\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b86227f-350e-4e03-aefd-00f308ccb238-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597473 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.597496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698269 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698507 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b86227f-350e-4e03-aefd-00f308ccb238-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698530 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gptfm\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-kube-api-access-gptfm\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.698579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b86227f-350e-4e03-aefd-00f308ccb238-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.702154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.702458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.702677 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.703015 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.703464 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.711471 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.713119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.714263 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b86227f-350e-4e03-aefd-00f308ccb238-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.719457 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gptfm\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-kube-api-access-gptfm\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.736208 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.736889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b86227f-350e-4e03-aefd-00f308ccb238-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.740586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:56 crc kubenswrapper[4832]: I0125 08:13:56.908744 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.082623 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.232100 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" event={"ID":"01866d50-e28c-44e2-a57d-5d5a7ea04626","Type":"ContainerStarted","Data":"f69eab5bb55672d1730590ea6bb7d002c0dae06eae0ead6b7108f7959b4a80f6"} Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.454973 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.718039 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.719881 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.730788 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.730858 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-pl429" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.731106 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.731169 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.731216 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.735853 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"combined-ca-bundle" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.850755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-kolla-config\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.850825 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ca53255-293b-4c35-a202-ac7ad7ac8d65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.850992 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.851045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca53255-293b-4c35-a202-ac7ad7ac8d65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.851112 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-config-data-default\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc 
kubenswrapper[4832]: I0125 08:13:57.851141 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ca53255-293b-4c35-a202-ac7ad7ac8d65-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.851160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.851186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9b7\" (UniqueName: \"kubernetes.io/projected/9ca53255-293b-4c35-a202-ac7ad7ac8d65-kube-api-access-jm9b7\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.955041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.955292 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.955159 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca53255-293b-4c35-a202-ac7ad7ac8d65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.955841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-config-data-default\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.955881 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ca53255-293b-4c35-a202-ac7ad7ac8d65-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.955905 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.955958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9b7\" (UniqueName: \"kubernetes.io/projected/9ca53255-293b-4c35-a202-ac7ad7ac8d65-kube-api-access-jm9b7\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.956000 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-kolla-config\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.956034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ca53255-293b-4c35-a202-ac7ad7ac8d65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.956691 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ca53255-293b-4c35-a202-ac7ad7ac8d65-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.957029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-config-data-default\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.957258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-kolla-config\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.959551 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ca53255-293b-4c35-a202-ac7ad7ac8d65-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " 
pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.965123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca53255-293b-4c35-a202-ac7ad7ac8d65-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.986342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ca53255-293b-4c35-a202-ac7ad7ac8d65-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.987833 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9b7\" (UniqueName: \"kubernetes.io/projected/9ca53255-293b-4c35-a202-ac7ad7ac8d65-kube-api-access-jm9b7\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:57 crc kubenswrapper[4832]: I0125 08:13:57.989503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9ca53255-293b-4c35-a202-ac7ad7ac8d65\") " pod="openstack/openstack-galera-0" Jan 25 08:13:58 crc kubenswrapper[4832]: I0125 08:13:58.048461 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.083563 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.093011 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.110944 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.111150 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.111639 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lvtsz" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.111646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.111873 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f07a95-68ce-4138-b2ff-ef2543e68e46-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284268 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284301 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9gj6\" (UniqueName: 
\"kubernetes.io/projected/43f07a95-68ce-4138-b2ff-ef2543e68e46-kube-api-access-v9gj6\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284340 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43f07a95-68ce-4138-b2ff-ef2543e68e46-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284368 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284416 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.284477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/43f07a95-68ce-4138-b2ff-ef2543e68e46-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.380886 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.381913 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.384595 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.384774 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.385027 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vbjb4" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f07a95-68ce-4138-b2ff-ef2543e68e46-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386300 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f07a95-68ce-4138-b2ff-ef2543e68e46-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386345 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9gj6\" (UniqueName: \"kubernetes.io/projected/43f07a95-68ce-4138-b2ff-ef2543e68e46-kube-api-access-v9gj6\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386365 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386397 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43f07a95-68ce-4138-b2ff-ef2543e68e46-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386418 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.386440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") 
pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.387745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43f07a95-68ce-4138-b2ff-ef2543e68e46-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.388480 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.389529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.390210 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.391064 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f07a95-68ce-4138-b2ff-ef2543e68e46-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " 
pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.391424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f07a95-68ce-4138-b2ff-ef2543e68e46-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.392414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f07a95-68ce-4138-b2ff-ef2543e68e46-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.411758 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.421176 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.424235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9gj6\" (UniqueName: \"kubernetes.io/projected/43f07a95-68ce-4138-b2ff-ef2543e68e46-kube-api-access-v9gj6\") pod \"openstack-cell1-galera-0\" (UID: \"43f07a95-68ce-4138-b2ff-ef2543e68e46\") " pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.488001 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44713664-4137-4321-baff-36c54dcbae96-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.488105 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/44713664-4137-4321-baff-36c54dcbae96-memcached-tls-certs\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.488140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28mwv\" (UniqueName: \"kubernetes.io/projected/44713664-4137-4321-baff-36c54dcbae96-kube-api-access-28mwv\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.488162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44713664-4137-4321-baff-36c54dcbae96-kolla-config\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.488182 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44713664-4137-4321-baff-36c54dcbae96-config-data\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.594873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/44713664-4137-4321-baff-36c54dcbae96-memcached-tls-certs\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 
08:13:59.594919 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28mwv\" (UniqueName: \"kubernetes.io/projected/44713664-4137-4321-baff-36c54dcbae96-kube-api-access-28mwv\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.594948 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44713664-4137-4321-baff-36c54dcbae96-kolla-config\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.594970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44713664-4137-4321-baff-36c54dcbae96-config-data\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.595009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44713664-4137-4321-baff-36c54dcbae96-combined-ca-bundle\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.596303 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44713664-4137-4321-baff-36c54dcbae96-kolla-config\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.596509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44713664-4137-4321-baff-36c54dcbae96-config-data\") pod \"memcached-0\" (UID: 
\"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.601720 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/44713664-4137-4321-baff-36c54dcbae96-memcached-tls-certs\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.611344 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44713664-4137-4321-baff-36c54dcbae96-combined-ca-bundle\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.613687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28mwv\" (UniqueName: \"kubernetes.io/projected/44713664-4137-4321-baff-36c54dcbae96-kube-api-access-28mwv\") pod \"memcached-0\" (UID: \"44713664-4137-4321-baff-36c54dcbae96\") " pod="openstack/memcached-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.758683 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 25 08:13:59 crc kubenswrapper[4832]: I0125 08:13:59.768613 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:00.998936 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:01.001655 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:01.004601 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kqhgm" Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:01.034513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585mz\" (UniqueName: \"kubernetes.io/projected/2bf96fb8-1a77-4546-ba91-aa18499fa5c4-kube-api-access-585mz\") pod \"kube-state-metrics-0\" (UID: \"2bf96fb8-1a77-4546-ba91-aa18499fa5c4\") " pod="openstack/kube-state-metrics-0" Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:01.036312 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:01.136658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-585mz\" (UniqueName: \"kubernetes.io/projected/2bf96fb8-1a77-4546-ba91-aa18499fa5c4-kube-api-access-585mz\") pod \"kube-state-metrics-0\" (UID: \"2bf96fb8-1a77-4546-ba91-aa18499fa5c4\") " pod="openstack/kube-state-metrics-0" Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:01.154411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-585mz\" (UniqueName: \"kubernetes.io/projected/2bf96fb8-1a77-4546-ba91-aa18499fa5c4-kube-api-access-585mz\") pod \"kube-state-metrics-0\" (UID: \"2bf96fb8-1a77-4546-ba91-aa18499fa5c4\") " pod="openstack/kube-state-metrics-0" Jan 25 08:14:01 crc kubenswrapper[4832]: I0125 08:14:01.326241 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 25 08:14:02 crc kubenswrapper[4832]: W0125 08:14:02.552791 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b86227f_350e_4e03_aefd_00f308ccb238.slice/crio-000c97a78739b10d84af5f007299db50fd9d0dfbe104f338dba76f12ec758ed4 WatchSource:0}: Error finding container 000c97a78739b10d84af5f007299db50fd9d0dfbe104f338dba76f12ec758ed4: Status 404 returned error can't find the container with id 000c97a78739b10d84af5f007299db50fd9d0dfbe104f338dba76f12ec758ed4 Jan 25 08:14:02 crc kubenswrapper[4832]: W0125 08:14:02.557581 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f80d9a5_5d45_4053_875c_908242efc5e9.slice/crio-0d94bc578c73ae11547fdb3111358597a30e981ca6604de55f0df30a236b7445 WatchSource:0}: Error finding container 0d94bc578c73ae11547fdb3111358597a30e981ca6604de55f0df30a236b7445: Status 404 returned error can't find the container with id 0d94bc578c73ae11547fdb3111358597a30e981ca6604de55f0df30a236b7445 Jan 25 08:14:03 crc kubenswrapper[4832]: I0125 08:14:03.305576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f80d9a5-5d45-4053-875c-908242efc5e9","Type":"ContainerStarted","Data":"0d94bc578c73ae11547fdb3111358597a30e981ca6604de55f0df30a236b7445"} Jan 25 08:14:03 crc kubenswrapper[4832]: I0125 08:14:03.306729 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b86227f-350e-4e03-aefd-00f308ccb238","Type":"ContainerStarted","Data":"000c97a78739b10d84af5f007299db50fd9d0dfbe104f338dba76f12ec758ed4"} Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.907184 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n6hrr"] Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.908954 4832 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.913240 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4r5jd" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.913490 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.913867 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.925121 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tk26k"] Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.926958 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.931074 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.932539 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.934371 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.934656 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.934858 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.934976 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8gg8f" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.935093 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.946542 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tk26k"] Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.998009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-log-ovn\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.998050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54cecc85-b18f-4136-bd00-cbcc0f680643-combined-ca-bundle\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.998217 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-run\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.999429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75w9\" (UniqueName: \"kubernetes.io/projected/54cecc85-b18f-4136-bd00-cbcc0f680643-kube-api-access-k75w9\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.999511 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/54cecc85-b18f-4136-bd00-cbcc0f680643-ovn-controller-tls-certs\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.999541 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54cecc85-b18f-4136-bd00-cbcc0f680643-scripts\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:04 crc kubenswrapper[4832]: I0125 08:14:04.999566 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-run-ovn\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.012408 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-n6hrr"] Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.044399 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-log\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6b5ae-927c-4920-9ad4-bc1936555efb-scripts\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101574 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-log-ovn\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101661 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54cecc85-b18f-4136-bd00-cbcc0f680643-combined-ca-bundle\") pod \"ovn-controller-n6hrr\" (UID: 
\"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101735 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjt9\" (UniqueName: \"kubernetes.io/projected/0d2475d7-df45-45d0-a604-22b5008d000f-kube-api-access-zcjt9\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.101918 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d2475d7-df45-45d0-a604-22b5008d000f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-run\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75w9\" (UniqueName: \"kubernetes.io/projected/54cecc85-b18f-4136-bd00-cbcc0f680643-kube-api-access-k75w9\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 
08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d2475d7-df45-45d0-a604-22b5008d000f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-log-ovn\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-run\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102370 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/54cecc85-b18f-4136-bd00-cbcc0f680643-ovn-controller-tls-certs\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102515 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d2475d7-df45-45d0-a604-22b5008d000f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102683 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54cecc85-b18f-4136-bd00-cbcc0f680643-scripts\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-run\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102834 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-run-ovn\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-etc-ovs\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.102982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-lib\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.103071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lvk\" (UniqueName: \"kubernetes.io/projected/1eb6b5ae-927c-4920-9ad4-bc1936555efb-kube-api-access-k4lvk\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.103146 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.103002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/54cecc85-b18f-4136-bd00-cbcc0f680643-var-run-ovn\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.104666 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54cecc85-b18f-4136-bd00-cbcc0f680643-scripts\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.107086 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/54cecc85-b18f-4136-bd00-cbcc0f680643-ovn-controller-tls-certs\") pod 
\"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.109963 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54cecc85-b18f-4136-bd00-cbcc0f680643-combined-ca-bundle\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.117537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k75w9\" (UniqueName: \"kubernetes.io/projected/54cecc85-b18f-4136-bd00-cbcc0f680643-kube-api-access-k75w9\") pod \"ovn-controller-n6hrr\" (UID: \"54cecc85-b18f-4136-bd00-cbcc0f680643\") " pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.204909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-run\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-etc-ovs\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc 
kubenswrapper[4832]: I0125 08:14:05.205274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-lib\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lvk\" (UniqueName: \"kubernetes.io/projected/1eb6b5ae-927c-4920-9ad4-bc1936555efb-kube-api-access-k4lvk\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205312 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-log\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6b5ae-927c-4920-9ad4-bc1936555efb-scripts\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205409 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcjt9\" (UniqueName: \"kubernetes.io/projected/0d2475d7-df45-45d0-a604-22b5008d000f-kube-api-access-zcjt9\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d2475d7-df45-45d0-a604-22b5008d000f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d2475d7-df45-45d0-a604-22b5008d000f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d2475d7-df45-45d0-a604-22b5008d000f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " 
pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d2475d7-df45-45d0-a604-22b5008d000f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.205090 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-run\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.206887 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-etc-ovs\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.206968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-lib\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.207113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1eb6b5ae-927c-4920-9ad4-bc1936555efb-var-log\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.207274 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.208234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d2475d7-df45-45d0-a604-22b5008d000f-config\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.209088 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d2475d7-df45-45d0-a604-22b5008d000f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.210253 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.210338 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.211163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2475d7-df45-45d0-a604-22b5008d000f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " 
pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.217813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1eb6b5ae-927c-4920-9ad4-bc1936555efb-scripts\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.223032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lvk\" (UniqueName: \"kubernetes.io/projected/1eb6b5ae-927c-4920-9ad4-bc1936555efb-kube-api-access-k4lvk\") pod \"ovn-controller-ovs-tk26k\" (UID: \"1eb6b5ae-927c-4920-9ad4-bc1936555efb\") " pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.225419 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.226353 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcjt9\" (UniqueName: \"kubernetes.io/projected/0d2475d7-df45-45d0-a604-22b5008d000f-kube-api-access-zcjt9\") pod \"ovsdbserver-nb-0\" (UID: \"0d2475d7-df45-45d0-a604-22b5008d000f\") " pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.280969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.291008 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:05 crc kubenswrapper[4832]: I0125 08:14:05.298406 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:08 crc kubenswrapper[4832]: I0125 08:14:08.954558 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 25 08:14:08 crc kubenswrapper[4832]: I0125 08:14:08.956582 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:08 crc kubenswrapper[4832]: I0125 08:14:08.958974 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 25 08:14:08 crc kubenswrapper[4832]: I0125 08:14:08.960586 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 25 08:14:08 crc kubenswrapper[4832]: I0125 08:14:08.960780 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 25 08:14:08 crc kubenswrapper[4832]: I0125 08:14:08.973764 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fghrl" Jan 25 08:14:08 crc kubenswrapper[4832]: I0125 08:14:08.995975 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.080139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.080220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-config\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc 
kubenswrapper[4832]: I0125 08:14:09.080550 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkpmf\" (UniqueName: \"kubernetes.io/projected/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-kube-api-access-bkpmf\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.081035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.081134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.081165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.081184 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.081279 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.183154 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-config\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.183995 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkpmf\" (UniqueName: \"kubernetes.io/projected/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-kube-api-access-bkpmf\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.184086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.184114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.184131 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.184146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.184166 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.184199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.183947 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-config\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.184996 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") device mount path \"/mnt/openstack/pv09\"" 
pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.185062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.185628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.193747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.195254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.195627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.204319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkpmf\" 
(UniqueName: \"kubernetes.io/projected/666395bf-0cf6-4e7a-a0d0-2ad1a8928424-kube-api-access-bkpmf\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.212125 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"666395bf-0cf6-4e7a-a0d0-2ad1a8928424\") " pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:09 crc kubenswrapper[4832]: I0125 08:14:09.296381 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.348076 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.348716 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plwkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-jwr5g_openstack(01866d50-e28c-44e2-a57d-5d5a7ea04626): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.349927 4832 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.423124 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.423626 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5w8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-gfs8w_openstack(daa59b36-5024-41ae-88f1-49703006f341): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.425263 4832 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" podUID="daa59b36-5024-41ae-88f1-49703006f341" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.429775 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.429899 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2fcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-qrk8t_openstack(5e04d739-fa58-4eeb-aa09-415c9472a144): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.431016 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" podUID="5e04d739-fa58-4eeb-aa09-415c9472a144" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.439857 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.439980 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzppj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-hl42z_openstack(b53b2f44-1755-45cd-b63e-32e5109e10c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.441314 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" podUID="b53b2f44-1755-45cd-b63e-32e5109e10c1" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.464822 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" podUID="daa59b36-5024-41ae-88f1-49703006f341" Jan 25 08:14:19 crc kubenswrapper[4832]: E0125 08:14:19.464835 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" Jan 25 08:14:19 crc kubenswrapper[4832]: I0125 08:14:19.871306 4832 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:19.996372 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2fcg\" (UniqueName: \"kubernetes.io/projected/5e04d739-fa58-4eeb-aa09-415c9472a144-kube-api-access-c2fcg\") pod \"5e04d739-fa58-4eeb-aa09-415c9472a144\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:19.996509 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e04d739-fa58-4eeb-aa09-415c9472a144-config\") pod \"5e04d739-fa58-4eeb-aa09-415c9472a144\" (UID: \"5e04d739-fa58-4eeb-aa09-415c9472a144\") " Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:19.997368 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e04d739-fa58-4eeb-aa09-415c9472a144-config" (OuterVolumeSpecName: "config") pod "5e04d739-fa58-4eeb-aa09-415c9472a144" (UID: "5e04d739-fa58-4eeb-aa09-415c9472a144"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:19.998800 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.005539 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.016668 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e04d739-fa58-4eeb-aa09-415c9472a144-kube-api-access-c2fcg" (OuterVolumeSpecName: "kube-api-access-c2fcg") pod "5e04d739-fa58-4eeb-aa09-415c9472a144" (UID: "5e04d739-fa58-4eeb-aa09-415c9472a144"). InnerVolumeSpecName "kube-api-access-c2fcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:20 crc kubenswrapper[4832]: W0125 08:14:20.018829 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43f07a95_68ce_4138_b2ff_ef2543e68e46.slice/crio-514b7869cb7965d24398579b6e3c61cff30b9eff66577a22411d84dd6e72bd8f WatchSource:0}: Error finding container 514b7869cb7965d24398579b6e3c61cff30b9eff66577a22411d84dd6e72bd8f: Status 404 returned error can't find the container with id 514b7869cb7965d24398579b6e3c61cff30b9eff66577a22411d84dd6e72bd8f Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.035694 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.102361 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2fcg\" (UniqueName: \"kubernetes.io/projected/5e04d739-fa58-4eeb-aa09-415c9472a144-kube-api-access-c2fcg\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.102496 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e04d739-fa58-4eeb-aa09-415c9472a144-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.155298 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tk26k"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.168237 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6hrr"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.194911 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 25 08:14:20 crc kubenswrapper[4832]: W0125 08:14:20.206336 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ca53255_293b_4c35_a202_ac7ad7ac8d65.slice/crio-36e6bf47cfe7d52d1c535c8c1a449bb4f3ea3f38a2f7d108b0cfb593809227d5 WatchSource:0}: Error finding container 36e6bf47cfe7d52d1c535c8c1a449bb4f3ea3f38a2f7d108b0cfb593809227d5: Status 404 returned error can't find the container with id 36e6bf47cfe7d52d1c535c8c1a449bb4f3ea3f38a2f7d108b0cfb593809227d5 Jan 25 08:14:20 crc kubenswrapper[4832]: W0125 08:14:20.264427 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d2475d7_df45_45d0_a604_22b5008d000f.slice/crio-d819964c6fa3def9418729d0f8886d2c6a8372ff8cac92364bc7eb20a670273c WatchSource:0}: Error finding container d819964c6fa3def9418729d0f8886d2c6a8372ff8cac92364bc7eb20a670273c: Status 404 returned error can't find the container with id d819964c6fa3def9418729d0f8886d2c6a8372ff8cac92364bc7eb20a670273c Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.266211 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.463123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"44713664-4137-4321-baff-36c54dcbae96","Type":"ContainerStarted","Data":"25dcaeeef4dfbbe12ad97a49533a370c52eff96096d344dadb9142f0c278a35d"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.464394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6hrr" event={"ID":"54cecc85-b18f-4136-bd00-cbcc0f680643","Type":"ContainerStarted","Data":"1a4af0c77d925f9c508c9eb67f157f65fdfdb43a2ffad849a45fadebc4efde3f"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.465405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"9ca53255-293b-4c35-a202-ac7ad7ac8d65","Type":"ContainerStarted","Data":"36e6bf47cfe7d52d1c535c8c1a449bb4f3ea3f38a2f7d108b0cfb593809227d5"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.466536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43f07a95-68ce-4138-b2ff-ef2543e68e46","Type":"ContainerStarted","Data":"514b7869cb7965d24398579b6e3c61cff30b9eff66577a22411d84dd6e72bd8f"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.467451 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tk26k" event={"ID":"1eb6b5ae-927c-4920-9ad4-bc1936555efb","Type":"ContainerStarted","Data":"8562cda2ceabde6c1fcad8ced6818e463d9b914e569e45067bae2a03894c508a"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.468492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" event={"ID":"5e04d739-fa58-4eeb-aa09-415c9472a144","Type":"ContainerDied","Data":"78b4b5bf1115971f71d6430b0f4ffe7b99851af8e22edb776ece15ae7484b005"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.468520 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qrk8t" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.469826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d2475d7-df45-45d0-a604-22b5008d000f","Type":"ContainerStarted","Data":"d819964c6fa3def9418729d0f8886d2c6a8372ff8cac92364bc7eb20a670273c"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.473132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2bf96fb8-1a77-4546-ba91-aa18499fa5c4","Type":"ContainerStarted","Data":"631abf2a2e5554c2327a2bbc655e10f6c7c1fba7de706586683185004fe4b4b0"} Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.527929 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qrk8t"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.533485 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qrk8t"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.792136 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.895575 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.916482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-config\") pod \"b53b2f44-1755-45cd-b63e-32e5109e10c1\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.916610 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzppj\" (UniqueName: \"kubernetes.io/projected/b53b2f44-1755-45cd-b63e-32e5109e10c1-kube-api-access-vzppj\") pod \"b53b2f44-1755-45cd-b63e-32e5109e10c1\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.916692 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-dns-svc\") pod \"b53b2f44-1755-45cd-b63e-32e5109e10c1\" (UID: \"b53b2f44-1755-45cd-b63e-32e5109e10c1\") " Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.917421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b53b2f44-1755-45cd-b63e-32e5109e10c1" (UID: "b53b2f44-1755-45cd-b63e-32e5109e10c1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.917578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-config" (OuterVolumeSpecName: "config") pod "b53b2f44-1755-45cd-b63e-32e5109e10c1" (UID: "b53b2f44-1755-45cd-b63e-32e5109e10c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:20 crc kubenswrapper[4832]: I0125 08:14:20.937610 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53b2f44-1755-45cd-b63e-32e5109e10c1-kube-api-access-vzppj" (OuterVolumeSpecName: "kube-api-access-vzppj") pod "b53b2f44-1755-45cd-b63e-32e5109e10c1" (UID: "b53b2f44-1755-45cd-b63e-32e5109e10c1"). InnerVolumeSpecName "kube-api-access-vzppj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.021103 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.021146 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b53b2f44-1755-45cd-b63e-32e5109e10c1-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.021162 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzppj\" (UniqueName: \"kubernetes.io/projected/b53b2f44-1755-45cd-b63e-32e5109e10c1-kube-api-access-vzppj\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.482491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"666395bf-0cf6-4e7a-a0d0-2ad1a8928424","Type":"ContainerStarted","Data":"f4671a8b560fc4fa342fbbcf832915cba14c7733646e2921377102437bdc78c3"} Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.485091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f80d9a5-5d45-4053-875c-908242efc5e9","Type":"ContainerStarted","Data":"8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e"} Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.487308 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" event={"ID":"b53b2f44-1755-45cd-b63e-32e5109e10c1","Type":"ContainerDied","Data":"c0d453652a34580cbe6669f2ed97f5fee359f954bb39e438527b7a52cf1d47ba"} Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.487344 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hl42z" Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.489522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b86227f-350e-4e03-aefd-00f308ccb238","Type":"ContainerStarted","Data":"b460c04d4adb8e23c0d8d586e6e38768fc8da8021c8d34a10874eaba07e58ccf"} Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.572092 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hl42z"] Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.592619 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hl42z"] Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.690023 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e04d739-fa58-4eeb-aa09-415c9472a144" path="/var/lib/kubelet/pods/5e04d739-fa58-4eeb-aa09-415c9472a144/volumes" Jan 25 08:14:21 crc kubenswrapper[4832]: I0125 08:14:21.690410 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b53b2f44-1755-45cd-b63e-32e5109e10c1" path="/var/lib/kubelet/pods/b53b2f44-1755-45cd-b63e-32e5109e10c1/volumes" Jan 25 08:14:22 crc kubenswrapper[4832]: I0125 08:14:22.149542 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:14:22 crc kubenswrapper[4832]: I0125 08:14:22.149592 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.536675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9ca53255-293b-4c35-a202-ac7ad7ac8d65","Type":"ContainerStarted","Data":"4b28db0c6a30a5e2f8f2e1aec450e0c53b4ec439dadb63cb2c7959ef679980fe"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.539348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6hrr" event={"ID":"54cecc85-b18f-4136-bd00-cbcc0f680643","Type":"ContainerStarted","Data":"aa672d561857300d796c2003bc1a3a3c777fe9e189e5be305f02ecce50671269"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.539702 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n6hrr" Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.541838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"666395bf-0cf6-4e7a-a0d0-2ad1a8928424","Type":"ContainerStarted","Data":"5dd3347660f0810ffd94083068453fc7e0377b818247dcab39c2d6a3a66bb2f3"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 
08:14:27.545146 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43f07a95-68ce-4138-b2ff-ef2543e68e46","Type":"ContainerStarted","Data":"7e6ff94248f2324fbc1429e3bb25d85982d1ab4e4f1897c16be7aaeb51c98659"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.547509 4832 generic.go:334] "Generic (PLEG): container finished" podID="1eb6b5ae-927c-4920-9ad4-bc1936555efb" containerID="ef833a6b7674b683311372725024bcaf8a1788a9f43536ded6952277a04c5852" exitCode=0 Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.547569 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tk26k" event={"ID":"1eb6b5ae-927c-4920-9ad4-bc1936555efb","Type":"ContainerDied","Data":"ef833a6b7674b683311372725024bcaf8a1788a9f43536ded6952277a04c5852"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.549848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d2475d7-df45-45d0-a604-22b5008d000f","Type":"ContainerStarted","Data":"ff778ce8a41fe311b3d6cfa73b2d4e37d81472cf8c66e0b000b5d3feca2f6afb"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.551734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2bf96fb8-1a77-4546-ba91-aa18499fa5c4","Type":"ContainerStarted","Data":"782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.552475 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.559972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"44713664-4137-4321-baff-36c54dcbae96","Type":"ContainerStarted","Data":"bdd2a99e71d7b9b0114c78a83f9e05400b505447416c68409d2b1c57c1a02c01"} Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.561214 4832 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.600399 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n6hrr" podStartSLOduration=17.106291955 podStartE2EDuration="23.600355683s" podCreationTimestamp="2026-01-25 08:14:04 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.180967405 +0000 UTC m=+1042.854790938" lastFinishedPulling="2026-01-25 08:14:26.675031133 +0000 UTC m=+1049.348854666" observedRunningTime="2026-01-25 08:14:27.596614946 +0000 UTC m=+1050.270438479" watchObservedRunningTime="2026-01-25 08:14:27.600355683 +0000 UTC m=+1050.274179216" Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.623417 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.361480423 podStartE2EDuration="28.623398696s" podCreationTimestamp="2026-01-25 08:13:59 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.035797408 +0000 UTC m=+1042.709620941" lastFinishedPulling="2026-01-25 08:14:26.297715671 +0000 UTC m=+1048.971539214" observedRunningTime="2026-01-25 08:14:27.621486686 +0000 UTC m=+1050.295310249" watchObservedRunningTime="2026-01-25 08:14:27.623398696 +0000 UTC m=+1050.297222249" Jan 25 08:14:27 crc kubenswrapper[4832]: I0125 08:14:27.668983 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=20.933111 podStartE2EDuration="27.668957716s" podCreationTimestamp="2026-01-25 08:14:00 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.010840885 +0000 UTC m=+1042.684664418" lastFinishedPulling="2026-01-25 08:14:26.746687601 +0000 UTC m=+1049.420511134" observedRunningTime="2026-01-25 08:14:27.662533634 +0000 UTC m=+1050.336357167" watchObservedRunningTime="2026-01-25 08:14:27.668957716 +0000 UTC m=+1050.342781249" Jan 25 08:14:28 crc kubenswrapper[4832]: I0125 08:14:28.577911 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tk26k" event={"ID":"1eb6b5ae-927c-4920-9ad4-bc1936555efb","Type":"ContainerStarted","Data":"988d111dc0f9e035f87eda8d6bacd2e39c59210cc7121f5a6fc7b24510668ce2"} Jan 25 08:14:28 crc kubenswrapper[4832]: I0125 08:14:28.578296 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tk26k" event={"ID":"1eb6b5ae-927c-4920-9ad4-bc1936555efb","Type":"ContainerStarted","Data":"233aa0e184ac428c5a4b70c84d8c865eb0a30033388423e2fd88096e9ce31865"} Jan 25 08:14:29 crc kubenswrapper[4832]: I0125 08:14:29.585089 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:29 crc kubenswrapper[4832]: I0125 08:14:29.585133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:14:29 crc kubenswrapper[4832]: I0125 08:14:29.697328 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tk26k" podStartSLOduration=19.260515983 podStartE2EDuration="25.697308303s" podCreationTimestamp="2026-01-25 08:14:04 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.180968815 +0000 UTC m=+1042.854792348" lastFinishedPulling="2026-01-25 08:14:26.617761145 +0000 UTC m=+1049.291584668" observedRunningTime="2026-01-25 08:14:28.600069118 +0000 UTC m=+1051.273892661" watchObservedRunningTime="2026-01-25 08:14:29.697308303 +0000 UTC m=+1052.371131836" Jan 25 08:14:30 crc kubenswrapper[4832]: I0125 08:14:30.598670 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d2475d7-df45-45d0-a604-22b5008d000f","Type":"ContainerStarted","Data":"7ab60f8bb0e9d9491c928b39969e490d83cb3d3dad05ec9ccb799e3acfe8572c"} Jan 25 08:14:30 crc kubenswrapper[4832]: I0125 08:14:30.603145 4832 generic.go:334] "Generic (PLEG): container finished" podID="01866d50-e28c-44e2-a57d-5d5a7ea04626" 
containerID="46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f" exitCode=0 Jan 25 08:14:30 crc kubenswrapper[4832]: I0125 08:14:30.603190 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" event={"ID":"01866d50-e28c-44e2-a57d-5d5a7ea04626","Type":"ContainerDied","Data":"46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f"} Jan 25 08:14:30 crc kubenswrapper[4832]: I0125 08:14:30.608270 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"666395bf-0cf6-4e7a-a0d0-2ad1a8928424","Type":"ContainerStarted","Data":"98d949bffdaac0bff9cb2c65f70778fb37621d0451cdd6f574f1369914828f84"} Jan 25 08:14:30 crc kubenswrapper[4832]: I0125 08:14:30.648787 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.843891659 podStartE2EDuration="27.648768614s" podCreationTimestamp="2026-01-25 08:14:03 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.266763907 +0000 UTC m=+1042.940587440" lastFinishedPulling="2026-01-25 08:14:30.071640862 +0000 UTC m=+1052.745464395" observedRunningTime="2026-01-25 08:14:30.622884652 +0000 UTC m=+1053.296708205" watchObservedRunningTime="2026-01-25 08:14:30.648768614 +0000 UTC m=+1053.322592147" Jan 25 08:14:30 crc kubenswrapper[4832]: I0125 08:14:30.665883 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.530670983 podStartE2EDuration="23.66586516s" podCreationTimestamp="2026-01-25 08:14:07 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.964461354 +0000 UTC m=+1043.638284887" lastFinishedPulling="2026-01-25 08:14:30.099655531 +0000 UTC m=+1052.773479064" observedRunningTime="2026-01-25 08:14:30.660194062 +0000 UTC m=+1053.334017635" watchObservedRunningTime="2026-01-25 08:14:30.66586516 +0000 UTC m=+1053.339688693" Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.333896 
4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.617933 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ca53255-293b-4c35-a202-ac7ad7ac8d65" containerID="4b28db0c6a30a5e2f8f2e1aec450e0c53b4ec439dadb63cb2c7959ef679980fe" exitCode=0 Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.617995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9ca53255-293b-4c35-a202-ac7ad7ac8d65","Type":"ContainerDied","Data":"4b28db0c6a30a5e2f8f2e1aec450e0c53b4ec439dadb63cb2c7959ef679980fe"} Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.621804 4832 generic.go:334] "Generic (PLEG): container finished" podID="43f07a95-68ce-4138-b2ff-ef2543e68e46" containerID="7e6ff94248f2324fbc1429e3bb25d85982d1ab4e4f1897c16be7aaeb51c98659" exitCode=0 Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.621887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43f07a95-68ce-4138-b2ff-ef2543e68e46","Type":"ContainerDied","Data":"7e6ff94248f2324fbc1429e3bb25d85982d1ab4e4f1897c16be7aaeb51c98659"} Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.626306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" event={"ID":"01866d50-e28c-44e2-a57d-5d5a7ea04626","Type":"ContainerStarted","Data":"8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e"} Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.626728 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:14:31 crc kubenswrapper[4832]: I0125 08:14:31.680456 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" podStartSLOduration=2.999934499 podStartE2EDuration="36.680436872s" 
podCreationTimestamp="2026-01-25 08:13:55 +0000 UTC" firstStartedPulling="2026-01-25 08:13:56.424448324 +0000 UTC m=+1019.098271857" lastFinishedPulling="2026-01-25 08:14:30.104950697 +0000 UTC m=+1052.778774230" observedRunningTime="2026-01-25 08:14:31.678795691 +0000 UTC m=+1054.352619224" watchObservedRunningTime="2026-01-25 08:14:31.680436872 +0000 UTC m=+1054.354260405" Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.299851 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.334211 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.639888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9ca53255-293b-4c35-a202-ac7ad7ac8d65","Type":"ContainerStarted","Data":"9c244733d5eff4d87f37b56fb1bbb90b7ad6653c3d69b7294a7e9a49a0e9dc47"} Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.643173 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43f07a95-68ce-4138-b2ff-ef2543e68e46","Type":"ContainerStarted","Data":"59c4b221247b6b4cb642d84a629e09da25c1ad5a6a16cc872543c2c95891ed07"} Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.643766 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.675870 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.203165623 podStartE2EDuration="36.675849101s" podCreationTimestamp="2026-01-25 08:13:56 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.207897039 +0000 UTC m=+1042.881720572" lastFinishedPulling="2026-01-25 08:14:26.680580517 +0000 UTC m=+1049.354404050" observedRunningTime="2026-01-25 
08:14:32.659319063 +0000 UTC m=+1055.333142636" watchObservedRunningTime="2026-01-25 08:14:32.675849101 +0000 UTC m=+1055.349672634" Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.687201 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.689137 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.094847295 podStartE2EDuration="34.689118128s" podCreationTimestamp="2026-01-25 08:13:58 +0000 UTC" firstStartedPulling="2026-01-25 08:14:20.023460181 +0000 UTC m=+1042.697283724" lastFinishedPulling="2026-01-25 08:14:26.617731024 +0000 UTC m=+1049.291554557" observedRunningTime="2026-01-25 08:14:32.684427801 +0000 UTC m=+1055.358251334" watchObservedRunningTime="2026-01-25 08:14:32.689118128 +0000 UTC m=+1055.362941661" Jan 25 08:14:32 crc kubenswrapper[4832]: I0125 08:14:32.950775 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gfs8w"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.001748 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hfhnp"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.003019 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.008971 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hcd8h"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.009238 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.010379 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.013257 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.016517 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hfhnp"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.038132 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hcd8h"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b65xx\" (UniqueName: \"kubernetes.io/projected/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-kube-api-access-b65xx\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127698 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-config\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-ovn-rundir\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6nf\" (UniqueName: 
\"kubernetes.io/projected/1bbbad5d-1634-4187-b9d8-0748dca46ba3-kube-api-access-pn6nf\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-config\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127813 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-combined-ca-bundle\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127851 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.127899 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-ovs-rundir\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.229845 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-config\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-ovn-rundir\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6nf\" (UniqueName: \"kubernetes.io/projected/1bbbad5d-1634-4187-b9d8-0748dca46ba3-kube-api-access-pn6nf\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-combined-ca-bundle\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230295 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-config\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230417 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-ovs-rundir\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.230444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b65xx\" (UniqueName: \"kubernetes.io/projected/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-kube-api-access-b65xx\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.231623 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-config\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.232209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-config\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.233522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.234735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" 
(UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.234798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-ovs-rundir\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.235524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-ovn-rundir\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.243344 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.246418 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-combined-ca-bundle\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.250134 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b65xx\" (UniqueName: \"kubernetes.io/projected/4b6aa9f6-e110-4147-a8d0-b1c8287226d1-kube-api-access-b65xx\") pod \"ovn-controller-metrics-hcd8h\" (UID: \"4b6aa9f6-e110-4147-a8d0-b1c8287226d1\") " 
pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.262303 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6nf\" (UniqueName: \"kubernetes.io/projected/1bbbad5d-1634-4187-b9d8-0748dca46ba3-kube-api-access-pn6nf\") pod \"dnsmasq-dns-7fd796d7df-hfhnp\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.299142 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.309328 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jwr5g"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.331534 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.340376 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ccnpl"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.341507 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.342052 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.345159 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.350627 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hcd8h" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.358640 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ccnpl"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.362903 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.434347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5w8s\" (UniqueName: \"kubernetes.io/projected/daa59b36-5024-41ae-88f1-49703006f341-kube-api-access-s5w8s\") pod \"daa59b36-5024-41ae-88f1-49703006f341\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.434435 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-dns-svc\") pod \"daa59b36-5024-41ae-88f1-49703006f341\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.434640 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-config\") pod \"daa59b36-5024-41ae-88f1-49703006f341\" (UID: \"daa59b36-5024-41ae-88f1-49703006f341\") " Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.434906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqgbs\" (UniqueName: \"kubernetes.io/projected/1adf8f99-37eb-4472-83a1-13c3500fadfe-kube-api-access-gqgbs\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.434978 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.435007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-config\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.435029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.435170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.444215 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "daa59b36-5024-41ae-88f1-49703006f341" (UID: "daa59b36-5024-41ae-88f1-49703006f341"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.444581 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-config" (OuterVolumeSpecName: "config") pod "daa59b36-5024-41ae-88f1-49703006f341" (UID: "daa59b36-5024-41ae-88f1-49703006f341"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.451427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa59b36-5024-41ae-88f1-49703006f341-kube-api-access-s5w8s" (OuterVolumeSpecName: "kube-api-access-s5w8s") pod "daa59b36-5024-41ae-88f1-49703006f341" (UID: "daa59b36-5024-41ae-88f1-49703006f341"). InnerVolumeSpecName "kube-api-access-s5w8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.541790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.541830 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-config\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.541849 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: 
\"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.541967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.543048 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqgbs\" (UniqueName: \"kubernetes.io/projected/1adf8f99-37eb-4472-83a1-13c3500fadfe-kube-api-access-gqgbs\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.543047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.542985 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.543083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-config\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc 
kubenswrapper[4832]: I0125 08:14:33.543079 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.543210 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.543238 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5w8s\" (UniqueName: \"kubernetes.io/projected/daa59b36-5024-41ae-88f1-49703006f341-kube-api-access-s5w8s\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.543255 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daa59b36-5024-41ae-88f1-49703006f341-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.563061 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqgbs\" (UniqueName: \"kubernetes.io/projected/1adf8f99-37eb-4472-83a1-13c3500fadfe-kube-api-access-gqgbs\") pod \"dnsmasq-dns-86db49b7ff-ccnpl\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.650872 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" event={"ID":"daa59b36-5024-41ae-88f1-49703006f341","Type":"ContainerDied","Data":"29df7776dfcb0b19da7dda07e5b50df41cc8f74c6a56241899a92f768bc74ef2"} Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.651003 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" containerName="dnsmasq-dns" containerID="cri-o://8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e" gracePeriod=10 Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.651050 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.652283 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.682764 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.714110 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.880254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hcd8h"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.916663 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.918059 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.920084 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.920264 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.920667 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tj6tr" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.926295 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.929660 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 25 08:14:33 crc kubenswrapper[4832]: I0125 08:14:33.948261 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hfhnp"] Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.053475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xc7\" (UniqueName: \"kubernetes.io/projected/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-kube-api-access-64xc7\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.053519 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-config\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.053585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.053607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.053632 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-scripts\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.053657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.053676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.155347 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.155458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.155492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-scripts\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.155521 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.155538 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.155595 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-config\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.155609 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-64xc7\" (UniqueName: \"kubernetes.io/projected/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-kube-api-access-64xc7\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.156333 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.158668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-scripts\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.159187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-config\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.161488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.162301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: 
I0125 08:14:34.166123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.191475 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xc7\" (UniqueName: \"kubernetes.io/projected/828fc400-0bbb-4fbb-ae6c-7aa12c12864a-kube-api-access-64xc7\") pod \"ovn-northd-0\" (UID: \"828fc400-0bbb-4fbb-ae6c-7aa12c12864a\") " pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.236116 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.257277 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.361540 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plwkd\" (UniqueName: \"kubernetes.io/projected/01866d50-e28c-44e2-a57d-5d5a7ea04626-kube-api-access-plwkd\") pod \"01866d50-e28c-44e2-a57d-5d5a7ea04626\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.361619 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-dns-svc\") pod \"01866d50-e28c-44e2-a57d-5d5a7ea04626\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.361677 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-config\") pod 
\"01866d50-e28c-44e2-a57d-5d5a7ea04626\" (UID: \"01866d50-e28c-44e2-a57d-5d5a7ea04626\") " Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.373770 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01866d50-e28c-44e2-a57d-5d5a7ea04626-kube-api-access-plwkd" (OuterVolumeSpecName: "kube-api-access-plwkd") pod "01866d50-e28c-44e2-a57d-5d5a7ea04626" (UID: "01866d50-e28c-44e2-a57d-5d5a7ea04626"). InnerVolumeSpecName "kube-api-access-plwkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.407437 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ccnpl"] Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.413573 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-config" (OuterVolumeSpecName: "config") pod "01866d50-e28c-44e2-a57d-5d5a7ea04626" (UID: "01866d50-e28c-44e2-a57d-5d5a7ea04626"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.431074 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01866d50-e28c-44e2-a57d-5d5a7ea04626" (UID: "01866d50-e28c-44e2-a57d-5d5a7ea04626"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.464496 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plwkd\" (UniqueName: \"kubernetes.io/projected/01866d50-e28c-44e2-a57d-5d5a7ea04626-kube-api-access-plwkd\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.464608 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.464668 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01866d50-e28c-44e2-a57d-5d5a7ea04626-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.659174 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hcd8h" event={"ID":"4b6aa9f6-e110-4147-a8d0-b1c8287226d1","Type":"ContainerStarted","Data":"dc7d94e28298837db5af08f1f790227d9262ebf9517b1e10f52b7e20a4a7b963"} Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.660523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" event={"ID":"1bbbad5d-1634-4187-b9d8-0748dca46ba3","Type":"ContainerStarted","Data":"28dd5309f3900267aaca0f15cb0099ae806ec0174635f08b8e5f767c24e1b542"} Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.661600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" event={"ID":"1adf8f99-37eb-4472-83a1-13c3500fadfe","Type":"ContainerStarted","Data":"5a9e5497a27b7a9c01c0c1e22a1df7fedfb9472b228c3e83304eae5890c9e1f7"} Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.663745 4832 generic.go:334] "Generic (PLEG): container finished" podID="01866d50-e28c-44e2-a57d-5d5a7ea04626" 
containerID="8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e" exitCode=0 Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.663796 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.663838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" event={"ID":"01866d50-e28c-44e2-a57d-5d5a7ea04626","Type":"ContainerDied","Data":"8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e"} Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.663888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jwr5g" event={"ID":"01866d50-e28c-44e2-a57d-5d5a7ea04626","Type":"ContainerDied","Data":"f69eab5bb55672d1730590ea6bb7d002c0dae06eae0ead6b7108f7959b4a80f6"} Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.663919 4832 scope.go:117] "RemoveContainer" containerID="8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.696619 4832 scope.go:117] "RemoveContainer" containerID="46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.697905 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jwr5g"] Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.705488 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jwr5g"] Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.717251 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.732021 4832 scope.go:117] "RemoveContainer" containerID="8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e" Jan 25 08:14:34 crc kubenswrapper[4832]: E0125 08:14:34.735185 4832 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e\": container with ID starting with 8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e not found: ID does not exist" containerID="8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.735232 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e"} err="failed to get container status \"8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e\": rpc error: code = NotFound desc = could not find container \"8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e\": container with ID starting with 8bf07fdd97df61bdadea7415e14b4bf6a6b8ea3df8c02106c763d60ceaff618e not found: ID does not exist" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.735262 4832 scope.go:117] "RemoveContainer" containerID="46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f" Jan 25 08:14:34 crc kubenswrapper[4832]: E0125 08:14:34.735609 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f\": container with ID starting with 46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f not found: ID does not exist" containerID="46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.735645 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f"} err="failed to get container status \"46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f\": rpc error: code = NotFound desc = could 
not find container \"46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f\": container with ID starting with 46cfb850c0e9af0ac1e3fcff67eefb9ea921fb9e5f5addd01348f3481ebfb60f not found: ID does not exist" Jan 25 08:14:34 crc kubenswrapper[4832]: I0125 08:14:34.770025 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 25 08:14:35 crc kubenswrapper[4832]: I0125 08:14:35.680716 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" path="/var/lib/kubelet/pods/01866d50-e28c-44e2-a57d-5d5a7ea04626/volumes" Jan 25 08:14:35 crc kubenswrapper[4832]: I0125 08:14:35.681795 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"828fc400-0bbb-4fbb-ae6c-7aa12c12864a","Type":"ContainerStarted","Data":"8a6049f52297622c046f8af0135fe3b10a36817dcbc4f1016241c1d0f8f47456"} Jan 25 08:14:38 crc kubenswrapper[4832]: I0125 08:14:38.049555 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 25 08:14:38 crc kubenswrapper[4832]: I0125 08:14:38.049988 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 25 08:14:39 crc kubenswrapper[4832]: I0125 08:14:39.759780 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 25 08:14:39 crc kubenswrapper[4832]: I0125 08:14:39.759870 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.268823 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hfhnp"] Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.303975 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-vswdl"] Jan 25 08:14:41 crc kubenswrapper[4832]: E0125 
08:14:41.314653 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" containerName="init" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.314683 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" containerName="init" Jan 25 08:14:41 crc kubenswrapper[4832]: E0125 08:14:41.314701 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" containerName="dnsmasq-dns" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.314708 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" containerName="dnsmasq-dns" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.314870 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="01866d50-e28c-44e2-a57d-5d5a7ea04626" containerName="dnsmasq-dns" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.315727 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.343578 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vswdl"] Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.433330 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcj2\" (UniqueName: \"kubernetes.io/projected/d36bac18-e73f-4718-b2b7-89fc54febd73-kube-api-access-2mcj2\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.433416 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.433446 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.433471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-config\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.433495 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-dns-svc\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.534705 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mcj2\" (UniqueName: \"kubernetes.io/projected/d36bac18-e73f-4718-b2b7-89fc54febd73-kube-api-access-2mcj2\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.535098 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.535121 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.535147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-config\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.535168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-dns-svc\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.536151 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-dns-svc\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.536244 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.536379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-config\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.536667 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.565081 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mcj2\" (UniqueName: \"kubernetes.io/projected/d36bac18-e73f-4718-b2b7-89fc54febd73-kube-api-access-2mcj2\") pod 
\"dnsmasq-dns-698758b865-vswdl\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.652506 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:41 crc kubenswrapper[4832]: I0125 08:14:41.723693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" event={"ID":"1bbbad5d-1634-4187-b9d8-0748dca46ba3","Type":"ContainerStarted","Data":"bbeb9f60155b56ea289d86ec6b408ac7b703ed4455c73737806b1cea6ed7ad80"} Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.122107 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vswdl"] Jan 25 08:14:42 crc kubenswrapper[4832]: W0125 08:14:42.128863 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd36bac18_e73f_4718_b2b7_89fc54febd73.slice/crio-29d2f489404d2649fc8b8f47acbb40f91aa617609dffd2fafca640fca875c641 WatchSource:0}: Error finding container 29d2f489404d2649fc8b8f47acbb40f91aa617609dffd2fafca640fca875c641: Status 404 returned error can't find the container with id 29d2f489404d2649fc8b8f47acbb40f91aa617609dffd2fafca640fca875c641 Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.431510 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.450718 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.454814 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.455026 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.455563 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.457614 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vlb7z" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.481557 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.567974 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxnv\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-kube-api-access-6jxnv\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.568050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68ef9e02-9e33-48c3-a32b-ceae36687171-cache\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.568201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 
08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.568324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.568543 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/68ef9e02-9e33-48c3-a32b-ceae36687171-lock\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.568634 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef9e02-9e33-48c3-a32b-ceae36687171-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.669844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68ef9e02-9e33-48c3-a32b-ceae36687171-cache\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.669952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.669989 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.670033 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/68ef9e02-9e33-48c3-a32b-ceae36687171-lock\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.670058 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef9e02-9e33-48c3-a32b-ceae36687171-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.670112 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxnv\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-kube-api-access-6jxnv\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: E0125 08:14:42.671210 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 25 08:14:42 crc kubenswrapper[4832]: E0125 08:14:42.671239 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 25 08:14:42 crc kubenswrapper[4832]: E0125 08:14:42.671283 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift podName:68ef9e02-9e33-48c3-a32b-ceae36687171 nodeName:}" failed. 
No retries permitted until 2026-01-25 08:14:43.171267206 +0000 UTC m=+1065.845090739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift") pod "swift-storage-0" (UID: "68ef9e02-9e33-48c3-a32b-ceae36687171") : configmap "swift-ring-files" not found Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.671584 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/68ef9e02-9e33-48c3-a32b-ceae36687171-cache\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.671697 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.671823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/68ef9e02-9e33-48c3-a32b-ceae36687171-lock\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.677183 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ef9e02-9e33-48c3-a32b-ceae36687171-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.685888 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxnv\" (UniqueName: 
\"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-kube-api-access-6jxnv\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.692804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.732821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vswdl" event={"ID":"d36bac18-e73f-4718-b2b7-89fc54febd73","Type":"ContainerStarted","Data":"29d2f489404d2649fc8b8f47acbb40f91aa617609dffd2fafca640fca875c641"} Jan 25 08:14:42 crc kubenswrapper[4832]: I0125 08:14:42.733954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hcd8h" event={"ID":"4b6aa9f6-e110-4147-a8d0-b1c8287226d1","Type":"ContainerStarted","Data":"11fb00a7b8ade5adb66b0e0632ef63919515432e6364df42640ed581c1e2a7fa"} Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.016088 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-s7nx7"] Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.017196 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.020348 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.020592 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.020756 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.041653 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-s7nx7"] Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.178318 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-ring-data-devices\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.178366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8780670c-4459-4064-a5ee-d22abf7923aa-etc-swift\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.178430 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-dispersionconf\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 
08:14:43.178456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-combined-ca-bundle\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.178544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.178576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-scripts\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.178604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-swiftconf\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.178652 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh7r\" (UniqueName: \"kubernetes.io/projected/8780670c-4459-4064-a5ee-d22abf7923aa-kube-api-access-2vh7r\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: E0125 08:14:43.178752 4832 projected.go:288] 
Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 25 08:14:43 crc kubenswrapper[4832]: E0125 08:14:43.178786 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 25 08:14:43 crc kubenswrapper[4832]: E0125 08:14:43.178841 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift podName:68ef9e02-9e33-48c3-a32b-ceae36687171 nodeName:}" failed. No retries permitted until 2026-01-25 08:14:44.178823186 +0000 UTC m=+1066.852646719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift") pod "swift-storage-0" (UID: "68ef9e02-9e33-48c3-a32b-ceae36687171") : configmap "swift-ring-files" not found Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.279655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-ring-data-devices\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.279701 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8780670c-4459-4064-a5ee-d22abf7923aa-etc-swift\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.279725 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-dispersionconf\") pod \"swift-ring-rebalance-s7nx7\" (UID: 
\"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.279746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-combined-ca-bundle\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.279837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-scripts\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.279862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-swiftconf\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.279912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vh7r\" (UniqueName: \"kubernetes.io/projected/8780670c-4459-4064-a5ee-d22abf7923aa-kube-api-access-2vh7r\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.280131 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8780670c-4459-4064-a5ee-d22abf7923aa-etc-swift\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 
08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.280501 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-ring-data-devices\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.280707 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-scripts\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.284557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-swiftconf\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.284768 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-dispersionconf\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.287444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-combined-ca-bundle\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.302762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2vh7r\" (UniqueName: \"kubernetes.io/projected/8780670c-4459-4064-a5ee-d22abf7923aa-kube-api-access-2vh7r\") pod \"swift-ring-rebalance-s7nx7\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.330157 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.741824 4832 generic.go:334] "Generic (PLEG): container finished" podID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerID="8b14c3580ea07bc8194d982cd30d2aed67f35e96a302dd8547da50bd4e6f7561" exitCode=0 Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.742032 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" event={"ID":"1adf8f99-37eb-4472-83a1-13c3500fadfe","Type":"ContainerDied","Data":"8b14c3580ea07bc8194d982cd30d2aed67f35e96a302dd8547da50bd4e6f7561"} Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.743757 4832 generic.go:334] "Generic (PLEG): container finished" podID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerID="db57b244a480c2cd03b457004010e87222d6aaee3be4574b4d43bf073cb5417a" exitCode=0 Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.743827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vswdl" event={"ID":"d36bac18-e73f-4718-b2b7-89fc54febd73","Type":"ContainerDied","Data":"db57b244a480c2cd03b457004010e87222d6aaee3be4574b4d43bf073cb5417a"} Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.745319 4832 generic.go:334] "Generic (PLEG): container finished" podID="1bbbad5d-1634-4187-b9d8-0748dca46ba3" containerID="bbeb9f60155b56ea289d86ec6b408ac7b703ed4455c73737806b1cea6ed7ad80" exitCode=0 Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.746374 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" 
event={"ID":"1bbbad5d-1634-4187-b9d8-0748dca46ba3","Type":"ContainerDied","Data":"bbeb9f60155b56ea289d86ec6b408ac7b703ed4455c73737806b1cea6ed7ad80"} Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.794116 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hcd8h" podStartSLOduration=11.794096345 podStartE2EDuration="11.794096345s" podCreationTimestamp="2026-01-25 08:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:14:43.783952547 +0000 UTC m=+1066.457776100" watchObservedRunningTime="2026-01-25 08:14:43.794096345 +0000 UTC m=+1066.467919888" Jan 25 08:14:43 crc kubenswrapper[4832]: I0125 08:14:43.823130 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-s7nx7"] Jan 25 08:14:44 crc kubenswrapper[4832]: E0125 08:14:44.109111 4832 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 25 08:14:44 crc kubenswrapper[4832]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/1adf8f99-37eb-4472-83a1-13c3500fadfe/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 25 08:14:44 crc kubenswrapper[4832]: > podSandboxID="5a9e5497a27b7a9c01c0c1e22a1df7fedfb9472b228c3e83304eae5890c9e1f7" Jan 25 08:14:44 crc kubenswrapper[4832]: E0125 08:14:44.109244 4832 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 25 08:14:44 crc kubenswrapper[4832]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqgbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-ccnpl_openstack(1adf8f99-37eb-4472-83a1-13c3500fadfe): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/1adf8f99-37eb-4472-83a1-13c3500fadfe/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 25 08:14:44 crc kubenswrapper[4832]: > logger="UnhandledError" Jan 25 08:14:44 crc kubenswrapper[4832]: E0125 08:14:44.110810 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/1adf8f99-37eb-4472-83a1-13c3500fadfe/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.203937 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " 
pod="openstack/swift-storage-0" Jan 25 08:14:44 crc kubenswrapper[4832]: E0125 08:14:44.204206 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 25 08:14:44 crc kubenswrapper[4832]: E0125 08:14:44.204224 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 25 08:14:44 crc kubenswrapper[4832]: E0125 08:14:44.204267 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift podName:68ef9e02-9e33-48c3-a32b-ceae36687171 nodeName:}" failed. No retries permitted until 2026-01-25 08:14:46.204253908 +0000 UTC m=+1068.878077441 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift") pod "swift-storage-0" (UID: "68ef9e02-9e33-48c3-a32b-ceae36687171") : configmap "swift-ring-files" not found Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.209632 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.304604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-config\") pod \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.305068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-ovsdbserver-nb\") pod \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.305139 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-dns-svc\") pod \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.305160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn6nf\" (UniqueName: \"kubernetes.io/projected/1bbbad5d-1634-4187-b9d8-0748dca46ba3-kube-api-access-pn6nf\") pod \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\" (UID: \"1bbbad5d-1634-4187-b9d8-0748dca46ba3\") " Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.334514 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbbad5d-1634-4187-b9d8-0748dca46ba3-kube-api-access-pn6nf" (OuterVolumeSpecName: "kube-api-access-pn6nf") pod "1bbbad5d-1634-4187-b9d8-0748dca46ba3" (UID: "1bbbad5d-1634-4187-b9d8-0748dca46ba3"). InnerVolumeSpecName "kube-api-access-pn6nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.355510 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-config" (OuterVolumeSpecName: "config") pod "1bbbad5d-1634-4187-b9d8-0748dca46ba3" (UID: "1bbbad5d-1634-4187-b9d8-0748dca46ba3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.358804 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1bbbad5d-1634-4187-b9d8-0748dca46ba3" (UID: "1bbbad5d-1634-4187-b9d8-0748dca46ba3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.361410 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1bbbad5d-1634-4187-b9d8-0748dca46ba3" (UID: "1bbbad5d-1634-4187-b9d8-0748dca46ba3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.407695 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.407725 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.407735 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn6nf\" (UniqueName: \"kubernetes.io/projected/1bbbad5d-1634-4187-b9d8-0748dca46ba3-kube-api-access-pn6nf\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.407751 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bbbad5d-1634-4187-b9d8-0748dca46ba3-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.753737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7nx7" event={"ID":"8780670c-4459-4064-a5ee-d22abf7923aa","Type":"ContainerStarted","Data":"f2de4a6d987cd68c871d5df5cde98883b8140c607650deacac6b613d03330cdf"} Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.756174 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"828fc400-0bbb-4fbb-ae6c-7aa12c12864a","Type":"ContainerStarted","Data":"41936c2f8abdd329764c51086f44b9ccc4c7f2d8034094169f5046f35098e04f"} Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.756218 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"828fc400-0bbb-4fbb-ae6c-7aa12c12864a","Type":"ContainerStarted","Data":"4a8c23f336eb2fb93b3692a28ce5a2dc8a3cad3aaab29a1da74a234df017eec8"} Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.756513 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.759344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vswdl" event={"ID":"d36bac18-e73f-4718-b2b7-89fc54febd73","Type":"ContainerStarted","Data":"9d3da0a7bdd1779a51a05bb43d06cfc2079f43c7facd448746b691f4951b451d"} Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.759782 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.769636 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.770246 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hfhnp" event={"ID":"1bbbad5d-1634-4187-b9d8-0748dca46ba3","Type":"ContainerDied","Data":"28dd5309f3900267aaca0f15cb0099ae806ec0174635f08b8e5f767c24e1b542"} Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.772095 4832 scope.go:117] "RemoveContainer" containerID="bbeb9f60155b56ea289d86ec6b408ac7b703ed4455c73737806b1cea6ed7ad80" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.795153 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.73532203 podStartE2EDuration="11.795132512s" podCreationTimestamp="2026-01-25 08:14:33 +0000 UTC" firstStartedPulling="2026-01-25 08:14:34.74310394 +0000 UTC m=+1057.416927473" lastFinishedPulling="2026-01-25 08:14:43.802914422 +0000 UTC m=+1066.476737955" observedRunningTime="2026-01-25 08:14:44.786493461 +0000 UTC 
m=+1067.460317014" watchObservedRunningTime="2026-01-25 08:14:44.795132512 +0000 UTC m=+1067.468956045" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.814335 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-vswdl" podStartSLOduration=3.814319044 podStartE2EDuration="3.814319044s" podCreationTimestamp="2026-01-25 08:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:14:44.808883543 +0000 UTC m=+1067.482707096" watchObservedRunningTime="2026-01-25 08:14:44.814319044 +0000 UTC m=+1067.488142577" Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.956614 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hfhnp"] Jan 25 08:14:44 crc kubenswrapper[4832]: I0125 08:14:44.962011 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hfhnp"] Jan 25 08:14:45 crc kubenswrapper[4832]: I0125 08:14:45.681560 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbbad5d-1634-4187-b9d8-0748dca46ba3" path="/var/lib/kubelet/pods/1bbbad5d-1634-4187-b9d8-0748dca46ba3/volumes" Jan 25 08:14:45 crc kubenswrapper[4832]: I0125 08:14:45.785200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" event={"ID":"1adf8f99-37eb-4472-83a1-13c3500fadfe","Type":"ContainerStarted","Data":"e574064622c4daf8fe17a54f40aa590e39a9af8a7dc5c4f8056a9bba8d66795f"} Jan 25 08:14:45 crc kubenswrapper[4832]: I0125 08:14:45.786357 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:45 crc kubenswrapper[4832]: I0125 08:14:45.805397 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" podStartSLOduration=12.805366066 podStartE2EDuration="12.805366066s" 
podCreationTimestamp="2026-01-25 08:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:14:45.799940796 +0000 UTC m=+1068.473764329" watchObservedRunningTime="2026-01-25 08:14:45.805366066 +0000 UTC m=+1068.479189599" Jan 25 08:14:46 crc kubenswrapper[4832]: I0125 08:14:46.235118 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:46 crc kubenswrapper[4832]: E0125 08:14:46.235314 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 25 08:14:46 crc kubenswrapper[4832]: E0125 08:14:46.235588 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 25 08:14:46 crc kubenswrapper[4832]: E0125 08:14:46.235651 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift podName:68ef9e02-9e33-48c3-a32b-ceae36687171 nodeName:}" failed. No retries permitted until 2026-01-25 08:14:50.235633139 +0000 UTC m=+1072.909456672 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift") pod "swift-storage-0" (UID: "68ef9e02-9e33-48c3-a32b-ceae36687171") : configmap "swift-ring-files" not found Jan 25 08:14:47 crc kubenswrapper[4832]: I0125 08:14:47.477895 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 25 08:14:47 crc kubenswrapper[4832]: I0125 08:14:47.561433 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 25 08:14:47 crc kubenswrapper[4832]: I0125 08:14:47.903930 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 25 08:14:47 crc kubenswrapper[4832]: I0125 08:14:47.991353 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.394408 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-n7gsd"] Jan 25 08:14:49 crc kubenswrapper[4832]: E0125 08:14:49.395173 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbbad5d-1634-4187-b9d8-0748dca46ba3" containerName="init" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.395185 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbbad5d-1634-4187-b9d8-0748dca46ba3" containerName="init" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.395369 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbbad5d-1634-4187-b9d8-0748dca46ba3" containerName="init" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.395838 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.415374 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n7gsd"] Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.468425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1da6c5d-2894-431a-bec2-804d998b607b-operator-scripts\") pod \"keystone-db-create-n7gsd\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.468621 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg96m\" (UniqueName: \"kubernetes.io/projected/c1da6c5d-2894-431a-bec2-804d998b607b-kube-api-access-gg96m\") pod \"keystone-db-create-n7gsd\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.505592 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7fa9-account-create-update-9gzv2"] Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.506956 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.509052 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.509580 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fa9-account-create-update-9gzv2"] Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.570013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1da6c5d-2894-431a-bec2-804d998b607b-operator-scripts\") pod \"keystone-db-create-n7gsd\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.570075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/41d61b0c-2799-4be1-a1fb-d5402ada7efd-kube-api-access-fpb9b\") pod \"keystone-7fa9-account-create-update-9gzv2\" (UID: \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.570167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg96m\" (UniqueName: \"kubernetes.io/projected/c1da6c5d-2894-431a-bec2-804d998b607b-kube-api-access-gg96m\") pod \"keystone-db-create-n7gsd\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.570252 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d61b0c-2799-4be1-a1fb-d5402ada7efd-operator-scripts\") pod \"keystone-7fa9-account-create-update-9gzv2\" (UID: 
\"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.571404 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1da6c5d-2894-431a-bec2-804d998b607b-operator-scripts\") pod \"keystone-db-create-n7gsd\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.598621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg96m\" (UniqueName: \"kubernetes.io/projected/c1da6c5d-2894-431a-bec2-804d998b607b-kube-api-access-gg96m\") pod \"keystone-db-create-n7gsd\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.604065 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mkcbk"] Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.605195 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.612180 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mkcbk"] Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.671829 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d61b0c-2799-4be1-a1fb-d5402ada7efd-operator-scripts\") pod \"keystone-7fa9-account-create-update-9gzv2\" (UID: \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.671881 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qfr\" (UniqueName: \"kubernetes.io/projected/078f097c-bbd2-4fad-9ea6-0e92f09607c8-kube-api-access-v9qfr\") pod \"placement-db-create-mkcbk\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.671946 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078f097c-bbd2-4fad-9ea6-0e92f09607c8-operator-scripts\") pod \"placement-db-create-mkcbk\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.671988 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/41d61b0c-2799-4be1-a1fb-d5402ada7efd-kube-api-access-fpb9b\") pod \"keystone-7fa9-account-create-update-9gzv2\" (UID: \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.673092 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d61b0c-2799-4be1-a1fb-d5402ada7efd-operator-scripts\") pod \"keystone-7fa9-account-create-update-9gzv2\" (UID: \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.700784 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-36c3-account-create-update-m7jc9"] Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.700889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/41d61b0c-2799-4be1-a1fb-d5402ada7efd-kube-api-access-fpb9b\") pod \"keystone-7fa9-account-create-update-9gzv2\" (UID: \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.702018 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.704104 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.748148 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.756827 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-36c3-account-create-update-m7jc9"] Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.772758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13555380-67de-40bf-9255-d195682c6e56-operator-scripts\") pod \"placement-36c3-account-create-update-m7jc9\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.772830 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9qfr\" (UniqueName: \"kubernetes.io/projected/078f097c-bbd2-4fad-9ea6-0e92f09607c8-kube-api-access-v9qfr\") pod \"placement-db-create-mkcbk\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.772912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078f097c-bbd2-4fad-9ea6-0e92f09607c8-operator-scripts\") pod \"placement-db-create-mkcbk\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.772989 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fndz\" (UniqueName: \"kubernetes.io/projected/13555380-67de-40bf-9255-d195682c6e56-kube-api-access-4fndz\") pod \"placement-36c3-account-create-update-m7jc9\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.774076 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078f097c-bbd2-4fad-9ea6-0e92f09607c8-operator-scripts\") pod \"placement-db-create-mkcbk\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.794986 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9qfr\" (UniqueName: \"kubernetes.io/projected/078f097c-bbd2-4fad-9ea6-0e92f09607c8-kube-api-access-v9qfr\") pod \"placement-db-create-mkcbk\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.820537 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7nx7" event={"ID":"8780670c-4459-4064-a5ee-d22abf7923aa","Type":"ContainerStarted","Data":"21ae7a60ce8dfe46aacb6676a7fec11a2c54bef23e0eaadb3c681552db875aef"} Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.840554 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-s7nx7" podStartSLOduration=2.890280857 podStartE2EDuration="7.840533765s" podCreationTimestamp="2026-01-25 08:14:42 +0000 UTC" firstStartedPulling="2026-01-25 08:14:43.838187318 +0000 UTC m=+1066.512010851" lastFinishedPulling="2026-01-25 08:14:48.788440216 +0000 UTC m=+1071.462263759" observedRunningTime="2026-01-25 08:14:49.836105187 +0000 UTC m=+1072.509928720" watchObservedRunningTime="2026-01-25 08:14:49.840533765 +0000 UTC m=+1072.514357298" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.914351 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.934992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fndz\" (UniqueName: \"kubernetes.io/projected/13555380-67de-40bf-9255-d195682c6e56-kube-api-access-4fndz\") pod \"placement-36c3-account-create-update-m7jc9\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.935172 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13555380-67de-40bf-9255-d195682c6e56-operator-scripts\") pod \"placement-36c3-account-create-update-m7jc9\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.940235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13555380-67de-40bf-9255-d195682c6e56-operator-scripts\") pod \"placement-36c3-account-create-update-m7jc9\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.943825 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:49 crc kubenswrapper[4832]: I0125 08:14:49.959292 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fndz\" (UniqueName: \"kubernetes.io/projected/13555380-67de-40bf-9255-d195682c6e56-kube-api-access-4fndz\") pod \"placement-36c3-account-create-update-m7jc9\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:50 crc kubenswrapper[4832]: I0125 08:14:50.081419 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:50 crc kubenswrapper[4832]: I0125 08:14:50.249218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:50 crc kubenswrapper[4832]: E0125 08:14:50.249488 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 25 08:14:50 crc kubenswrapper[4832]: E0125 08:14:50.249503 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 25 08:14:50 crc kubenswrapper[4832]: E0125 08:14:50.249550 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift podName:68ef9e02-9e33-48c3-a32b-ceae36687171 nodeName:}" failed. No retries permitted until 2026-01-25 08:14:58.249534191 +0000 UTC m=+1080.923357724 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift") pod "swift-storage-0" (UID: "68ef9e02-9e33-48c3-a32b-ceae36687171") : configmap "swift-ring-files" not found Jan 25 08:14:50 crc kubenswrapper[4832]: I0125 08:14:50.380918 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n7gsd"] Jan 25 08:14:50 crc kubenswrapper[4832]: I0125 08:14:50.837495 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n7gsd" event={"ID":"c1da6c5d-2894-431a-bec2-804d998b607b","Type":"ContainerStarted","Data":"201f6d2c316f2683a5cc2ce5979bc19b95b2b22bb51dca55411bd5ac69855848"} Jan 25 08:14:50 crc kubenswrapper[4832]: I0125 08:14:50.838006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n7gsd" event={"ID":"c1da6c5d-2894-431a-bec2-804d998b607b","Type":"ContainerStarted","Data":"35a8897585b428b76932230e0bc9f3b7c748312a324d54b7132eaada50ca722a"} Jan 25 08:14:50 crc kubenswrapper[4832]: I0125 08:14:50.856817 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-n7gsd" podStartSLOduration=1.8567969789999998 podStartE2EDuration="1.856796979s" podCreationTimestamp="2026-01-25 08:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:14:50.852576427 +0000 UTC m=+1073.526399960" watchObservedRunningTime="2026-01-25 08:14:50.856796979 +0000 UTC m=+1073.530620512" Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.018489 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fa9-account-create-update-9gzv2"] Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.100823 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-36c3-account-create-update-m7jc9"] Jan 25 08:14:51 crc 
kubenswrapper[4832]: I0125 08:14:51.108232 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mkcbk"] Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.654543 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.708693 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ccnpl"] Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.709079 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerName="dnsmasq-dns" containerID="cri-o://e574064622c4daf8fe17a54f40aa590e39a9af8a7dc5c4f8056a9bba8d66795f" gracePeriod=10 Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.713611 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.857492 4832 generic.go:334] "Generic (PLEG): container finished" podID="13555380-67de-40bf-9255-d195682c6e56" containerID="1b3b7c88c783e78f21260f9705950cd9a7906b374ee7543a4d8f6bf7bc36abab" exitCode=0 Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.857555 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-36c3-account-create-update-m7jc9" event={"ID":"13555380-67de-40bf-9255-d195682c6e56","Type":"ContainerDied","Data":"1b3b7c88c783e78f21260f9705950cd9a7906b374ee7543a4d8f6bf7bc36abab"} Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.857584 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-36c3-account-create-update-m7jc9" event={"ID":"13555380-67de-40bf-9255-d195682c6e56","Type":"ContainerStarted","Data":"c7f5a313e13584b6351be18b1b2d981f5ebce80918a9b7b1a8d2cbc2eaef6135"} Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 
08:14:51.862084 4832 generic.go:334] "Generic (PLEG): container finished" podID="078f097c-bbd2-4fad-9ea6-0e92f09607c8" containerID="639b3bfa6f1d4cc91f16c767b6214b91518bd1c57823b9dee0788b23bcf6a51f" exitCode=0 Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.862170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mkcbk" event={"ID":"078f097c-bbd2-4fad-9ea6-0e92f09607c8","Type":"ContainerDied","Data":"639b3bfa6f1d4cc91f16c767b6214b91518bd1c57823b9dee0788b23bcf6a51f"} Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.862197 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mkcbk" event={"ID":"078f097c-bbd2-4fad-9ea6-0e92f09607c8","Type":"ContainerStarted","Data":"6ad83444d91e26bcd67c43af91735fe58bcb67222c753eee996b2340a11528a5"} Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.869399 4832 generic.go:334] "Generic (PLEG): container finished" podID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerID="e574064622c4daf8fe17a54f40aa590e39a9af8a7dc5c4f8056a9bba8d66795f" exitCode=0 Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.869476 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" event={"ID":"1adf8f99-37eb-4472-83a1-13c3500fadfe","Type":"ContainerDied","Data":"e574064622c4daf8fe17a54f40aa590e39a9af8a7dc5c4f8056a9bba8d66795f"} Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.872127 4832 generic.go:334] "Generic (PLEG): container finished" podID="41d61b0c-2799-4be1-a1fb-d5402ada7efd" containerID="b37e1b6972a63335a0599c0210fd8992c16cf2493470030556beaa855933526f" exitCode=0 Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.872187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fa9-account-create-update-9gzv2" event={"ID":"41d61b0c-2799-4be1-a1fb-d5402ada7efd","Type":"ContainerDied","Data":"b37e1b6972a63335a0599c0210fd8992c16cf2493470030556beaa855933526f"} Jan 25 08:14:51 crc 
kubenswrapper[4832]: I0125 08:14:51.872211 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fa9-account-create-update-9gzv2" event={"ID":"41d61b0c-2799-4be1-a1fb-d5402ada7efd","Type":"ContainerStarted","Data":"6a92c79f5f97ff6375e0e54afd76dfbbb035516c7dcb44218379a21a2611038e"} Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.878651 4832 generic.go:334] "Generic (PLEG): container finished" podID="c1da6c5d-2894-431a-bec2-804d998b607b" containerID="201f6d2c316f2683a5cc2ce5979bc19b95b2b22bb51dca55411bd5ac69855848" exitCode=0 Jan 25 08:14:51 crc kubenswrapper[4832]: I0125 08:14:51.878697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n7gsd" event={"ID":"c1da6c5d-2894-431a-bec2-804d998b607b","Type":"ContainerDied","Data":"201f6d2c316f2683a5cc2ce5979bc19b95b2b22bb51dca55411bd5ac69855848"} Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.149846 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.149907 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.357597 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.491842 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-dns-svc\") pod \"1adf8f99-37eb-4472-83a1-13c3500fadfe\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.492600 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-config\") pod \"1adf8f99-37eb-4472-83a1-13c3500fadfe\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.492691 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqgbs\" (UniqueName: \"kubernetes.io/projected/1adf8f99-37eb-4472-83a1-13c3500fadfe-kube-api-access-gqgbs\") pod \"1adf8f99-37eb-4472-83a1-13c3500fadfe\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.492730 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-nb\") pod \"1adf8f99-37eb-4472-83a1-13c3500fadfe\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.492755 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-sb\") pod \"1adf8f99-37eb-4472-83a1-13c3500fadfe\" (UID: \"1adf8f99-37eb-4472-83a1-13c3500fadfe\") " Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.508884 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1adf8f99-37eb-4472-83a1-13c3500fadfe-kube-api-access-gqgbs" (OuterVolumeSpecName: "kube-api-access-gqgbs") pod "1adf8f99-37eb-4472-83a1-13c3500fadfe" (UID: "1adf8f99-37eb-4472-83a1-13c3500fadfe"). InnerVolumeSpecName "kube-api-access-gqgbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.534601 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1adf8f99-37eb-4472-83a1-13c3500fadfe" (UID: "1adf8f99-37eb-4472-83a1-13c3500fadfe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.535247 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1adf8f99-37eb-4472-83a1-13c3500fadfe" (UID: "1adf8f99-37eb-4472-83a1-13c3500fadfe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.544123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-config" (OuterVolumeSpecName: "config") pod "1adf8f99-37eb-4472-83a1-13c3500fadfe" (UID: "1adf8f99-37eb-4472-83a1-13c3500fadfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.565466 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1adf8f99-37eb-4472-83a1-13c3500fadfe" (UID: "1adf8f99-37eb-4472-83a1-13c3500fadfe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.594116 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqgbs\" (UniqueName: \"kubernetes.io/projected/1adf8f99-37eb-4472-83a1-13c3500fadfe-kube-api-access-gqgbs\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.594158 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.594170 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.594183 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.594194 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adf8f99-37eb-4472-83a1-13c3500fadfe-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.887037 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" event={"ID":"1adf8f99-37eb-4472-83a1-13c3500fadfe","Type":"ContainerDied","Data":"5a9e5497a27b7a9c01c0c1e22a1df7fedfb9472b228c3e83304eae5890c9e1f7"} Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.887073 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ccnpl" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.887086 4832 scope.go:117] "RemoveContainer" containerID="e574064622c4daf8fe17a54f40aa590e39a9af8a7dc5c4f8056a9bba8d66795f" Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.889019 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerID="8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e" exitCode=0 Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.889091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f80d9a5-5d45-4053-875c-908242efc5e9","Type":"ContainerDied","Data":"8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e"} Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.890514 4832 generic.go:334] "Generic (PLEG): container finished" podID="9b86227f-350e-4e03-aefd-00f308ccb238" containerID="b460c04d4adb8e23c0d8d586e6e38768fc8da8021c8d34a10874eaba07e58ccf" exitCode=0 Jan 25 08:14:52 crc kubenswrapper[4832]: I0125 08:14:52.890664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b86227f-350e-4e03-aefd-00f308ccb238","Type":"ContainerDied","Data":"b460c04d4adb8e23c0d8d586e6e38768fc8da8021c8d34a10874eaba07e58ccf"} Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.187522 4832 scope.go:117] "RemoveContainer" containerID="8b14c3580ea07bc8194d982cd30d2aed67f35e96a302dd8547da50bd4e6f7561" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.250517 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ccnpl"] Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.258341 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ccnpl"] Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.282060 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.423816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9qfr\" (UniqueName: \"kubernetes.io/projected/078f097c-bbd2-4fad-9ea6-0e92f09607c8-kube-api-access-v9qfr\") pod \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.423887 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078f097c-bbd2-4fad-9ea6-0e92f09607c8-operator-scripts\") pod \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\" (UID: \"078f097c-bbd2-4fad-9ea6-0e92f09607c8\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.424757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078f097c-bbd2-4fad-9ea6-0e92f09607c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "078f097c-bbd2-4fad-9ea6-0e92f09607c8" (UID: "078f097c-bbd2-4fad-9ea6-0e92f09607c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.443433 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078f097c-bbd2-4fad-9ea6-0e92f09607c8-kube-api-access-v9qfr" (OuterVolumeSpecName: "kube-api-access-v9qfr") pod "078f097c-bbd2-4fad-9ea6-0e92f09607c8" (UID: "078f097c-bbd2-4fad-9ea6-0e92f09607c8"). InnerVolumeSpecName "kube-api-access-v9qfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.456140 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.468534 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.480085 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.525736 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9qfr\" (UniqueName: \"kubernetes.io/projected/078f097c-bbd2-4fad-9ea6-0e92f09607c8-kube-api-access-v9qfr\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.526039 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/078f097c-bbd2-4fad-9ea6-0e92f09607c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.627329 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/41d61b0c-2799-4be1-a1fb-d5402ada7efd-kube-api-access-fpb9b\") pod \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\" (UID: \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.627407 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1da6c5d-2894-431a-bec2-804d998b607b-operator-scripts\") pod \"c1da6c5d-2894-431a-bec2-804d998b607b\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.627465 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fndz\" (UniqueName: 
\"kubernetes.io/projected/13555380-67de-40bf-9255-d195682c6e56-kube-api-access-4fndz\") pod \"13555380-67de-40bf-9255-d195682c6e56\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.627527 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg96m\" (UniqueName: \"kubernetes.io/projected/c1da6c5d-2894-431a-bec2-804d998b607b-kube-api-access-gg96m\") pod \"c1da6c5d-2894-431a-bec2-804d998b607b\" (UID: \"c1da6c5d-2894-431a-bec2-804d998b607b\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.627562 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d61b0c-2799-4be1-a1fb-d5402ada7efd-operator-scripts\") pod \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\" (UID: \"41d61b0c-2799-4be1-a1fb-d5402ada7efd\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.627593 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13555380-67de-40bf-9255-d195682c6e56-operator-scripts\") pod \"13555380-67de-40bf-9255-d195682c6e56\" (UID: \"13555380-67de-40bf-9255-d195682c6e56\") " Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.628701 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d61b0c-2799-4be1-a1fb-d5402ada7efd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41d61b0c-2799-4be1-a1fb-d5402ada7efd" (UID: "41d61b0c-2799-4be1-a1fb-d5402ada7efd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.628725 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1da6c5d-2894-431a-bec2-804d998b607b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1da6c5d-2894-431a-bec2-804d998b607b" (UID: "c1da6c5d-2894-431a-bec2-804d998b607b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.628891 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13555380-67de-40bf-9255-d195682c6e56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13555380-67de-40bf-9255-d195682c6e56" (UID: "13555380-67de-40bf-9255-d195682c6e56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.631156 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d61b0c-2799-4be1-a1fb-d5402ada7efd-kube-api-access-fpb9b" (OuterVolumeSpecName: "kube-api-access-fpb9b") pod "41d61b0c-2799-4be1-a1fb-d5402ada7efd" (UID: "41d61b0c-2799-4be1-a1fb-d5402ada7efd"). InnerVolumeSpecName "kube-api-access-fpb9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.632091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1da6c5d-2894-431a-bec2-804d998b607b-kube-api-access-gg96m" (OuterVolumeSpecName: "kube-api-access-gg96m") pod "c1da6c5d-2894-431a-bec2-804d998b607b" (UID: "c1da6c5d-2894-431a-bec2-804d998b607b"). InnerVolumeSpecName "kube-api-access-gg96m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.632940 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13555380-67de-40bf-9255-d195682c6e56-kube-api-access-4fndz" (OuterVolumeSpecName: "kube-api-access-4fndz") pod "13555380-67de-40bf-9255-d195682c6e56" (UID: "13555380-67de-40bf-9255-d195682c6e56"). InnerVolumeSpecName "kube-api-access-4fndz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.679450 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" path="/var/lib/kubelet/pods/1adf8f99-37eb-4472-83a1-13c3500fadfe/volumes" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.729037 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d61b0c-2799-4be1-a1fb-d5402ada7efd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.729067 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13555380-67de-40bf-9255-d195682c6e56-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.729077 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpb9b\" (UniqueName: \"kubernetes.io/projected/41d61b0c-2799-4be1-a1fb-d5402ada7efd-kube-api-access-fpb9b\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.729086 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1da6c5d-2894-431a-bec2-804d998b607b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.729094 4832 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4fndz\" (UniqueName: \"kubernetes.io/projected/13555380-67de-40bf-9255-d195682c6e56-kube-api-access-4fndz\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.729103 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg96m\" (UniqueName: \"kubernetes.io/projected/c1da6c5d-2894-431a-bec2-804d998b607b-kube-api-access-gg96m\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.900745 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f80d9a5-5d45-4053-875c-908242efc5e9","Type":"ContainerStarted","Data":"f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166"} Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.900940 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.902879 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b86227f-350e-4e03-aefd-00f308ccb238","Type":"ContainerStarted","Data":"b4222cb79b322095ec7642cdbdab0fdb9e6322bb2158b4beba10850315703092"} Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.903082 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.904022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-36c3-account-create-update-m7jc9" event={"ID":"13555380-67de-40bf-9255-d195682c6e56","Type":"ContainerDied","Data":"c7f5a313e13584b6351be18b1b2d981f5ebce80918a9b7b1a8d2cbc2eaef6135"} Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.904048 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7f5a313e13584b6351be18b1b2d981f5ebce80918a9b7b1a8d2cbc2eaef6135" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 
08:14:53.904088 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-36c3-account-create-update-m7jc9" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.906352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mkcbk" event={"ID":"078f097c-bbd2-4fad-9ea6-0e92f09607c8","Type":"ContainerDied","Data":"6ad83444d91e26bcd67c43af91735fe58bcb67222c753eee996b2340a11528a5"} Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.906374 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad83444d91e26bcd67c43af91735fe58bcb67222c753eee996b2340a11528a5" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.906422 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mkcbk" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.908932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fa9-account-create-update-9gzv2" event={"ID":"41d61b0c-2799-4be1-a1fb-d5402ada7efd","Type":"ContainerDied","Data":"6a92c79f5f97ff6375e0e54afd76dfbbb035516c7dcb44218379a21a2611038e"} Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.908951 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a92c79f5f97ff6375e0e54afd76dfbbb035516c7dcb44218379a21a2611038e" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.908984 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fa9-account-create-update-9gzv2" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.912050 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n7gsd" event={"ID":"c1da6c5d-2894-431a-bec2-804d998b607b","Type":"ContainerDied","Data":"35a8897585b428b76932230e0bc9f3b7c748312a324d54b7132eaada50ca722a"} Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.912079 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35a8897585b428b76932230e0bc9f3b7c748312a324d54b7132eaada50ca722a" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.912111 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n7gsd" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.929767 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.109974522 podStartE2EDuration="58.92974974s" podCreationTimestamp="2026-01-25 08:13:55 +0000 UTC" firstStartedPulling="2026-01-25 08:14:02.563792961 +0000 UTC m=+1025.237616494" lastFinishedPulling="2026-01-25 08:14:19.383568179 +0000 UTC m=+1042.057391712" observedRunningTime="2026-01-25 08:14:53.925692883 +0000 UTC m=+1076.599516416" watchObservedRunningTime="2026-01-25 08:14:53.92974974 +0000 UTC m=+1076.603573283" Jan 25 08:14:53 crc kubenswrapper[4832]: I0125 08:14:53.963019 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.104368737 podStartE2EDuration="58.962987983s" podCreationTimestamp="2026-01-25 08:13:55 +0000 UTC" firstStartedPulling="2026-01-25 08:14:02.559655082 +0000 UTC m=+1025.233478635" lastFinishedPulling="2026-01-25 08:14:19.418274348 +0000 UTC m=+1042.092097881" observedRunningTime="2026-01-25 08:14:53.962512708 +0000 UTC m=+1076.636336241" watchObservedRunningTime="2026-01-25 
08:14:53.962987983 +0000 UTC m=+1076.636811516" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.347974 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.936746 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-h7pph"] Jan 25 08:14:54 crc kubenswrapper[4832]: E0125 08:14:54.937300 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13555380-67de-40bf-9255-d195682c6e56" containerName="mariadb-account-create-update" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937314 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="13555380-67de-40bf-9255-d195682c6e56" containerName="mariadb-account-create-update" Jan 25 08:14:54 crc kubenswrapper[4832]: E0125 08:14:54.937335 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1da6c5d-2894-431a-bec2-804d998b607b" containerName="mariadb-database-create" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937343 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1da6c5d-2894-431a-bec2-804d998b607b" containerName="mariadb-database-create" Jan 25 08:14:54 crc kubenswrapper[4832]: E0125 08:14:54.937396 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerName="init" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937403 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerName="init" Jan 25 08:14:54 crc kubenswrapper[4832]: E0125 08:14:54.937417 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerName="dnsmasq-dns" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937422 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerName="dnsmasq-dns" Jan 25 08:14:54 crc 
kubenswrapper[4832]: E0125 08:14:54.937445 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078f097c-bbd2-4fad-9ea6-0e92f09607c8" containerName="mariadb-database-create" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937453 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="078f097c-bbd2-4fad-9ea6-0e92f09607c8" containerName="mariadb-database-create" Jan 25 08:14:54 crc kubenswrapper[4832]: E0125 08:14:54.937466 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d61b0c-2799-4be1-a1fb-d5402ada7efd" containerName="mariadb-account-create-update" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937474 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d61b0c-2799-4be1-a1fb-d5402ada7efd" containerName="mariadb-account-create-update" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937649 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="13555380-67de-40bf-9255-d195682c6e56" containerName="mariadb-account-create-update" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937661 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1da6c5d-2894-431a-bec2-804d998b607b" containerName="mariadb-database-create" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937679 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="078f097c-bbd2-4fad-9ea6-0e92f09607c8" containerName="mariadb-database-create" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937690 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1adf8f99-37eb-4472-83a1-13c3500fadfe" containerName="dnsmasq-dns" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.937703 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d61b0c-2799-4be1-a1fb-d5402ada7efd" containerName="mariadb-account-create-update" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.938350 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-h7pph" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.949799 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-h7pph"] Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.953015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be22c9ab-23d0-48ef-8d5d-298d42e5590f-operator-scripts\") pod \"glance-db-create-h7pph\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " pod="openstack/glance-db-create-h7pph" Jan 25 08:14:54 crc kubenswrapper[4832]: I0125 08:14:54.953064 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtqxn\" (UniqueName: \"kubernetes.io/projected/be22c9ab-23d0-48ef-8d5d-298d42e5590f-kube-api-access-qtqxn\") pod \"glance-db-create-h7pph\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " pod="openstack/glance-db-create-h7pph" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.059606 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be22c9ab-23d0-48ef-8d5d-298d42e5590f-operator-scripts\") pod \"glance-db-create-h7pph\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " pod="openstack/glance-db-create-h7pph" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.059672 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtqxn\" (UniqueName: \"kubernetes.io/projected/be22c9ab-23d0-48ef-8d5d-298d42e5590f-kube-api-access-qtqxn\") pod \"glance-db-create-h7pph\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " pod="openstack/glance-db-create-h7pph" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.100542 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtqxn\" (UniqueName: 
\"kubernetes.io/projected/be22c9ab-23d0-48ef-8d5d-298d42e5590f-kube-api-access-qtqxn\") pod \"glance-db-create-h7pph\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " pod="openstack/glance-db-create-h7pph" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.141364 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be22c9ab-23d0-48ef-8d5d-298d42e5590f-operator-scripts\") pod \"glance-db-create-h7pph\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " pod="openstack/glance-db-create-h7pph" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.173802 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d89-account-create-update-nnk7h"] Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.179248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.182350 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.189968 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d89-account-create-update-nnk7h"] Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.283753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpr8\" (UniqueName: \"kubernetes.io/projected/f4884afc-1fd6-43f9-bd20-b02a682b1975-kube-api-access-vgpr8\") pod \"glance-1d89-account-create-update-nnk7h\" (UID: \"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.284247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f4884afc-1fd6-43f9-bd20-b02a682b1975-operator-scripts\") pod \"glance-1d89-account-create-update-nnk7h\" (UID: \"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.377734 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h7pph" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.386120 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpr8\" (UniqueName: \"kubernetes.io/projected/f4884afc-1fd6-43f9-bd20-b02a682b1975-kube-api-access-vgpr8\") pod \"glance-1d89-account-create-update-nnk7h\" (UID: \"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.386260 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4884afc-1fd6-43f9-bd20-b02a682b1975-operator-scripts\") pod \"glance-1d89-account-create-update-nnk7h\" (UID: \"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.387237 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4884afc-1fd6-43f9-bd20-b02a682b1975-operator-scripts\") pod \"glance-1d89-account-create-update-nnk7h\" (UID: \"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.412016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpr8\" (UniqueName: \"kubernetes.io/projected/f4884afc-1fd6-43f9-bd20-b02a682b1975-kube-api-access-vgpr8\") pod \"glance-1d89-account-create-update-nnk7h\" (UID: 
\"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:55 crc kubenswrapper[4832]: I0125 08:14:55.539972 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.602167 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-h7pph"] Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.744823 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ldwjg"] Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.745986 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.751287 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.758275 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ldwjg"] Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.770128 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d89-account-create-update-nnk7h"] Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.786279 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xnl9\" (UniqueName: \"kubernetes.io/projected/899aaa97-a9b6-4ee7-9499-2114b65607af-kube-api-access-7xnl9\") pod \"root-account-create-update-ldwjg\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.786339 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/899aaa97-a9b6-4ee7-9499-2114b65607af-operator-scripts\") pod \"root-account-create-update-ldwjg\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.888208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xnl9\" (UniqueName: \"kubernetes.io/projected/899aaa97-a9b6-4ee7-9499-2114b65607af-kube-api-access-7xnl9\") pod \"root-account-create-update-ldwjg\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.888461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899aaa97-a9b6-4ee7-9499-2114b65607af-operator-scripts\") pod \"root-account-create-update-ldwjg\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.889222 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899aaa97-a9b6-4ee7-9499-2114b65607af-operator-scripts\") pod \"root-account-create-update-ldwjg\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:56 crc kubenswrapper[4832]: I0125 08:14:56.916001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xnl9\" (UniqueName: \"kubernetes.io/projected/899aaa97-a9b6-4ee7-9499-2114b65607af-kube-api-access-7xnl9\") pod \"root-account-create-update-ldwjg\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.081653 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-1d89-account-create-update-nnk7h" event={"ID":"f4884afc-1fd6-43f9-bd20-b02a682b1975","Type":"ContainerStarted","Data":"91f1c057cdb42c03f5d2e577b4c21aa0212750ee20de4ac6e8bbda20db4ec82a"} Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.081714 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d89-account-create-update-nnk7h" event={"ID":"f4884afc-1fd6-43f9-bd20-b02a682b1975","Type":"ContainerStarted","Data":"8ba16092c3e533a19f124a77783d707403148da52909af83914832763b93e7e2"} Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.083574 4832 generic.go:334] "Generic (PLEG): container finished" podID="be22c9ab-23d0-48ef-8d5d-298d42e5590f" containerID="86638e548ea7882a51876bf5fa20b5eb04d1b7db97b260c72f26e6ce546a7de9" exitCode=0 Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.083640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h7pph" event={"ID":"be22c9ab-23d0-48ef-8d5d-298d42e5590f","Type":"ContainerDied","Data":"86638e548ea7882a51876bf5fa20b5eb04d1b7db97b260c72f26e6ce546a7de9"} Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.083677 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h7pph" event={"ID":"be22c9ab-23d0-48ef-8d5d-298d42e5590f","Type":"ContainerStarted","Data":"bc0da36082edd0a39c997fa3c3ff0ddb3696505df51a15c0784146635c87422f"} Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.112982 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.117780 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1d89-account-create-update-nnk7h" podStartSLOduration=2.117749652 podStartE2EDuration="2.117749652s" podCreationTimestamp="2026-01-25 08:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:14:57.112881099 +0000 UTC m=+1079.786704652" watchObservedRunningTime="2026-01-25 08:14:57.117749652 +0000 UTC m=+1079.791573185" Jan 25 08:14:57 crc kubenswrapper[4832]: I0125 08:14:57.548663 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ldwjg"] Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.093642 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4884afc-1fd6-43f9-bd20-b02a682b1975" containerID="91f1c057cdb42c03f5d2e577b4c21aa0212750ee20de4ac6e8bbda20db4ec82a" exitCode=0 Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.093759 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d89-account-create-update-nnk7h" event={"ID":"f4884afc-1fd6-43f9-bd20-b02a682b1975","Type":"ContainerDied","Data":"91f1c057cdb42c03f5d2e577b4c21aa0212750ee20de4ac6e8bbda20db4ec82a"} Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.096432 4832 generic.go:334] "Generic (PLEG): container finished" podID="899aaa97-a9b6-4ee7-9499-2114b65607af" containerID="9ca814f6b8251cfd6b10bb677f8a7dcbc1d7ac5e4285315c0bb7306bb32d833a" exitCode=0 Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.096509 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ldwjg" event={"ID":"899aaa97-a9b6-4ee7-9499-2114b65607af","Type":"ContainerDied","Data":"9ca814f6b8251cfd6b10bb677f8a7dcbc1d7ac5e4285315c0bb7306bb32d833a"} Jan 25 08:14:58 crc 
kubenswrapper[4832]: I0125 08:14:58.096539 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ldwjg" event={"ID":"899aaa97-a9b6-4ee7-9499-2114b65607af","Type":"ContainerStarted","Data":"57f7aff9f317d38ecdc14598215a65178ee58e2d1aaf427b6a55504fa2662bc3"} Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.098942 4832 generic.go:334] "Generic (PLEG): container finished" podID="8780670c-4459-4064-a5ee-d22abf7923aa" containerID="21ae7a60ce8dfe46aacb6676a7fec11a2c54bef23e0eaadb3c681552db875aef" exitCode=0 Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.099150 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7nx7" event={"ID":"8780670c-4459-4064-a5ee-d22abf7923aa","Type":"ContainerDied","Data":"21ae7a60ce8dfe46aacb6676a7fec11a2c54bef23e0eaadb3c681552db875aef"} Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.316416 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.329099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/68ef9e02-9e33-48c3-a32b-ceae36687171-etc-swift\") pod \"swift-storage-0\" (UID: \"68ef9e02-9e33-48c3-a32b-ceae36687171\") " pod="openstack/swift-storage-0" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.382941 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.515090 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-h7pph" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.620315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be22c9ab-23d0-48ef-8d5d-298d42e5590f-operator-scripts\") pod \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.620371 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtqxn\" (UniqueName: \"kubernetes.io/projected/be22c9ab-23d0-48ef-8d5d-298d42e5590f-kube-api-access-qtqxn\") pod \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\" (UID: \"be22c9ab-23d0-48ef-8d5d-298d42e5590f\") " Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.621235 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be22c9ab-23d0-48ef-8d5d-298d42e5590f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be22c9ab-23d0-48ef-8d5d-298d42e5590f" (UID: "be22c9ab-23d0-48ef-8d5d-298d42e5590f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.624047 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be22c9ab-23d0-48ef-8d5d-298d42e5590f-kube-api-access-qtqxn" (OuterVolumeSpecName: "kube-api-access-qtqxn") pod "be22c9ab-23d0-48ef-8d5d-298d42e5590f" (UID: "be22c9ab-23d0-48ef-8d5d-298d42e5590f"). InnerVolumeSpecName "kube-api-access-qtqxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.722324 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be22c9ab-23d0-48ef-8d5d-298d42e5590f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.722356 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtqxn\" (UniqueName: \"kubernetes.io/projected/be22c9ab-23d0-48ef-8d5d-298d42e5590f-kube-api-access-qtqxn\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:58 crc kubenswrapper[4832]: I0125 08:14:58.995455 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.107192 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h7pph" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.107187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h7pph" event={"ID":"be22c9ab-23d0-48ef-8d5d-298d42e5590f","Type":"ContainerDied","Data":"bc0da36082edd0a39c997fa3c3ff0ddb3696505df51a15c0784146635c87422f"} Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.107398 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0da36082edd0a39c997fa3c3ff0ddb3696505df51a15c0784146635c87422f" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.109480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"4f92d50a2c9dec712c7bb1d5c48c93b11885751c4c01d68f0f62f9a525d7c217"} Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.571536 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ldwjg" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.578587 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.585245 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737579 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8780670c-4459-4064-a5ee-d22abf7923aa-etc-swift\") pod \"8780670c-4459-4064-a5ee-d22abf7923aa\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737625 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xnl9\" (UniqueName: \"kubernetes.io/projected/899aaa97-a9b6-4ee7-9499-2114b65607af-kube-api-access-7xnl9\") pod \"899aaa97-a9b6-4ee7-9499-2114b65607af\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899aaa97-a9b6-4ee7-9499-2114b65607af-operator-scripts\") pod \"899aaa97-a9b6-4ee7-9499-2114b65607af\" (UID: \"899aaa97-a9b6-4ee7-9499-2114b65607af\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737719 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-swiftconf\") pod \"8780670c-4459-4064-a5ee-d22abf7923aa\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737774 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4884afc-1fd6-43f9-bd20-b02a682b1975-operator-scripts\") pod \"f4884afc-1fd6-43f9-bd20-b02a682b1975\" (UID: \"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737796 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-dispersionconf\") pod \"8780670c-4459-4064-a5ee-d22abf7923aa\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737846 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgpr8\" (UniqueName: \"kubernetes.io/projected/f4884afc-1fd6-43f9-bd20-b02a682b1975-kube-api-access-vgpr8\") pod \"f4884afc-1fd6-43f9-bd20-b02a682b1975\" (UID: \"f4884afc-1fd6-43f9-bd20-b02a682b1975\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737881 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vh7r\" (UniqueName: \"kubernetes.io/projected/8780670c-4459-4064-a5ee-d22abf7923aa-kube-api-access-2vh7r\") pod \"8780670c-4459-4064-a5ee-d22abf7923aa\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737919 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-scripts\") pod \"8780670c-4459-4064-a5ee-d22abf7923aa\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737939 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-combined-ca-bundle\") pod \"8780670c-4459-4064-a5ee-d22abf7923aa\" 
(UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.737957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-ring-data-devices\") pod \"8780670c-4459-4064-a5ee-d22abf7923aa\" (UID: \"8780670c-4459-4064-a5ee-d22abf7923aa\") " Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.738798 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8780670c-4459-4064-a5ee-d22abf7923aa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8780670c-4459-4064-a5ee-d22abf7923aa" (UID: "8780670c-4459-4064-a5ee-d22abf7923aa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.738841 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899aaa97-a9b6-4ee7-9499-2114b65607af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "899aaa97-a9b6-4ee7-9499-2114b65607af" (UID: "899aaa97-a9b6-4ee7-9499-2114b65607af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.739324 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4884afc-1fd6-43f9-bd20-b02a682b1975-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4884afc-1fd6-43f9-bd20-b02a682b1975" (UID: "f4884afc-1fd6-43f9-bd20-b02a682b1975"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.739428 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8780670c-4459-4064-a5ee-d22abf7923aa" (UID: "8780670c-4459-4064-a5ee-d22abf7923aa"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.743625 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8780670c-4459-4064-a5ee-d22abf7923aa-kube-api-access-2vh7r" (OuterVolumeSpecName: "kube-api-access-2vh7r") pod "8780670c-4459-4064-a5ee-d22abf7923aa" (UID: "8780670c-4459-4064-a5ee-d22abf7923aa"). InnerVolumeSpecName "kube-api-access-2vh7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.747893 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899aaa97-a9b6-4ee7-9499-2114b65607af-kube-api-access-7xnl9" (OuterVolumeSpecName: "kube-api-access-7xnl9") pod "899aaa97-a9b6-4ee7-9499-2114b65607af" (UID: "899aaa97-a9b6-4ee7-9499-2114b65607af"). InnerVolumeSpecName "kube-api-access-7xnl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.748352 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4884afc-1fd6-43f9-bd20-b02a682b1975-kube-api-access-vgpr8" (OuterVolumeSpecName: "kube-api-access-vgpr8") pod "f4884afc-1fd6-43f9-bd20-b02a682b1975" (UID: "f4884afc-1fd6-43f9-bd20-b02a682b1975"). InnerVolumeSpecName "kube-api-access-vgpr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.751341 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8780670c-4459-4064-a5ee-d22abf7923aa" (UID: "8780670c-4459-4064-a5ee-d22abf7923aa"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.766199 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8780670c-4459-4064-a5ee-d22abf7923aa" (UID: "8780670c-4459-4064-a5ee-d22abf7923aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.773967 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-scripts" (OuterVolumeSpecName: "scripts") pod "8780670c-4459-4064-a5ee-d22abf7923aa" (UID: "8780670c-4459-4064-a5ee-d22abf7923aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.776103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8780670c-4459-4064-a5ee-d22abf7923aa" (UID: "8780670c-4459-4064-a5ee-d22abf7923aa"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839747 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgpr8\" (UniqueName: \"kubernetes.io/projected/f4884afc-1fd6-43f9-bd20-b02a682b1975-kube-api-access-vgpr8\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839788 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vh7r\" (UniqueName: \"kubernetes.io/projected/8780670c-4459-4064-a5ee-d22abf7923aa-kube-api-access-2vh7r\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839802 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839814 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839826 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8780670c-4459-4064-a5ee-d22abf7923aa-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839836 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8780670c-4459-4064-a5ee-d22abf7923aa-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839847 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xnl9\" (UniqueName: \"kubernetes.io/projected/899aaa97-a9b6-4ee7-9499-2114b65607af-kube-api-access-7xnl9\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: 
I0125 08:14:59.839858 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899aaa97-a9b6-4ee7-9499-2114b65607af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839868 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839880 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4884afc-1fd6-43f9-bd20-b02a682b1975-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:14:59 crc kubenswrapper[4832]: I0125 08:14:59.839890 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8780670c-4459-4064-a5ee-d22abf7923aa-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.120115 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d89-account-create-update-nnk7h" event={"ID":"f4884afc-1fd6-43f9-bd20-b02a682b1975","Type":"ContainerDied","Data":"8ba16092c3e533a19f124a77783d707403148da52909af83914832763b93e7e2"} Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.120370 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d89-account-create-update-nnk7h" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.120395 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ba16092c3e533a19f124a77783d707403148da52909af83914832763b93e7e2" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.123054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ldwjg" event={"ID":"899aaa97-a9b6-4ee7-9499-2114b65607af","Type":"ContainerDied","Data":"57f7aff9f317d38ecdc14598215a65178ee58e2d1aaf427b6a55504fa2662bc3"} Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.123085 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f7aff9f317d38ecdc14598215a65178ee58e2d1aaf427b6a55504fa2662bc3" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.123141 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ldwjg" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.131735 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s7nx7" event={"ID":"8780670c-4459-4064-a5ee-d22abf7923aa","Type":"ContainerDied","Data":"f2de4a6d987cd68c871d5df5cde98883b8140c607650deacac6b613d03330cdf"} Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.131767 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-s7nx7" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.131775 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2de4a6d987cd68c871d5df5cde98883b8140c607650deacac6b613d03330cdf" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.229439 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm"] Jan 25 08:15:00 crc kubenswrapper[4832]: E0125 08:15:00.229818 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4884afc-1fd6-43f9-bd20-b02a682b1975" containerName="mariadb-account-create-update" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.229835 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4884afc-1fd6-43f9-bd20-b02a682b1975" containerName="mariadb-account-create-update" Jan 25 08:15:00 crc kubenswrapper[4832]: E0125 08:15:00.229862 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8780670c-4459-4064-a5ee-d22abf7923aa" containerName="swift-ring-rebalance" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.229870 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8780670c-4459-4064-a5ee-d22abf7923aa" containerName="swift-ring-rebalance" Jan 25 08:15:00 crc kubenswrapper[4832]: E0125 08:15:00.229887 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899aaa97-a9b6-4ee7-9499-2114b65607af" containerName="mariadb-account-create-update" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.229895 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="899aaa97-a9b6-4ee7-9499-2114b65607af" containerName="mariadb-account-create-update" Jan 25 08:15:00 crc kubenswrapper[4832]: E0125 08:15:00.229906 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be22c9ab-23d0-48ef-8d5d-298d42e5590f" containerName="mariadb-database-create" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 
08:15:00.229914 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="be22c9ab-23d0-48ef-8d5d-298d42e5590f" containerName="mariadb-database-create" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.230105 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="be22c9ab-23d0-48ef-8d5d-298d42e5590f" containerName="mariadb-database-create" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.230117 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="899aaa97-a9b6-4ee7-9499-2114b65607af" containerName="mariadb-account-create-update" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.230130 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4884afc-1fd6-43f9-bd20-b02a682b1975" containerName="mariadb-account-create-update" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.230142 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8780670c-4459-4064-a5ee-d22abf7923aa" containerName="swift-ring-rebalance" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.230865 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.244300 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dnzjb"] Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.244608 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.245039 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.253423 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.255294 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8rn6w" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.255657 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.261955 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm"] Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.275335 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dnzjb"] Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.324723 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n6hrr" podUID="54cecc85-b18f-4136-bd00-cbcc0f680643" containerName="ovn-controller" probeResult="failure" output=< Jan 25 08:15:00 crc kubenswrapper[4832]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 25 08:15:00 crc kubenswrapper[4832]: > Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.347955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.349730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6g5x\" (UniqueName: \"kubernetes.io/projected/88b922f3-0125-4078-8ec7-ad4edd04d0ed-kube-api-access-t6g5x\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.349781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-config-data\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.349805 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-combined-ca-bundle\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.349840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tr8\" (UniqueName: \"kubernetes.io/projected/a053d916-f24b-4013-b7bf-9a4abe14e218-kube-api-access-s5tr8\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.349870 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a053d916-f24b-4013-b7bf-9a4abe14e218-config-volume\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.349986 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tk26k" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.350127 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a053d916-f24b-4013-b7bf-9a4abe14e218-secret-volume\") pod \"collect-profiles-29488815-gd6rm\" (UID: 
\"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.350223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-db-sync-config-data\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.452338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a053d916-f24b-4013-b7bf-9a4abe14e218-secret-volume\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.452406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-db-sync-config-data\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.452478 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6g5x\" (UniqueName: \"kubernetes.io/projected/88b922f3-0125-4078-8ec7-ad4edd04d0ed-kube-api-access-t6g5x\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.452512 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-config-data\") pod \"glance-db-sync-dnzjb\" (UID: 
\"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.452532 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-combined-ca-bundle\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.452586 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tr8\" (UniqueName: \"kubernetes.io/projected/a053d916-f24b-4013-b7bf-9a4abe14e218-kube-api-access-s5tr8\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.452612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a053d916-f24b-4013-b7bf-9a4abe14e218-config-volume\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.454121 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a053d916-f24b-4013-b7bf-9a4abe14e218-config-volume\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.456633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-db-sync-config-data\") pod 
\"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.456942 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-config-data\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.457034 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-combined-ca-bundle\") pod \"glance-db-sync-dnzjb\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.457882 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a053d916-f24b-4013-b7bf-9a4abe14e218-secret-volume\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.470037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tr8\" (UniqueName: \"kubernetes.io/projected/a053d916-f24b-4013-b7bf-9a4abe14e218-kube-api-access-s5tr8\") pod \"collect-profiles-29488815-gd6rm\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.475830 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6g5x\" (UniqueName: \"kubernetes.io/projected/88b922f3-0125-4078-8ec7-ad4edd04d0ed-kube-api-access-t6g5x\") pod \"glance-db-sync-dnzjb\" (UID: 
\"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.574460 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n6hrr-config-pjzs9"] Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.575846 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.577761 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.620899 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.621458 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6hrr-config-pjzs9"] Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.655495 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-scripts\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.655549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.655619 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-additional-scripts\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.655650 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-log-ovn\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.655694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run-ovn\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.655723 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvrc\" (UniqueName: \"kubernetes.io/projected/bebc619a-e953-4cd3-b90e-325f3b0344ff-kube-api-access-dvvrc\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.682173 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnzjb" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.756889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvrc\" (UniqueName: \"kubernetes.io/projected/bebc619a-e953-4cd3-b90e-325f3b0344ff-kube-api-access-dvvrc\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.757042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-scripts\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.757073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.757153 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-additional-scripts\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.757182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-log-ovn\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " 
pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.757229 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run-ovn\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.757613 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run-ovn\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.758517 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-log-ovn\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.758622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.759611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-additional-scripts\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 
08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.761675 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-scripts\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.776692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvrc\" (UniqueName: \"kubernetes.io/projected/bebc619a-e953-4cd3-b90e-325f3b0344ff-kube-api-access-dvvrc\") pod \"ovn-controller-n6hrr-config-pjzs9\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:00 crc kubenswrapper[4832]: I0125 08:15:00.893003 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:01 crc kubenswrapper[4832]: I0125 08:15:01.095044 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm"] Jan 25 08:15:01 crc kubenswrapper[4832]: W0125 08:15:01.114032 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda053d916_f24b_4013_b7bf_9a4abe14e218.slice/crio-259538b408f57d9715d91cef9313625899b753600a47a579a270498023cb684a WatchSource:0}: Error finding container 259538b408f57d9715d91cef9313625899b753600a47a579a270498023cb684a: Status 404 returned error can't find the container with id 259538b408f57d9715d91cef9313625899b753600a47a579a270498023cb684a Jan 25 08:15:01 crc kubenswrapper[4832]: I0125 08:15:01.158881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"719279866e4ae6993150780a592bd25dbdae17d718b9f8a05c6d9c7d515e3f4d"} 
Jan 25 08:15:01 crc kubenswrapper[4832]: I0125 08:15:01.158961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"ed1198f26e71faa42fbebd6206818d83ab26831a477c4fe0b18d73ca5d85f10f"} Jan 25 08:15:01 crc kubenswrapper[4832]: I0125 08:15:01.166362 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" event={"ID":"a053d916-f24b-4013-b7bf-9a4abe14e218","Type":"ContainerStarted","Data":"259538b408f57d9715d91cef9313625899b753600a47a579a270498023cb684a"} Jan 25 08:15:02 crc kubenswrapper[4832]: I0125 08:15:02.118089 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n6hrr-config-pjzs9"] Jan 25 08:15:02 crc kubenswrapper[4832]: I0125 08:15:02.195019 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"f2c194a63dbc456dbf5403059bdbe3ccdce17537240721331dec2192983e7000"} Jan 25 08:15:02 crc kubenswrapper[4832]: I0125 08:15:02.199588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" event={"ID":"a053d916-f24b-4013-b7bf-9a4abe14e218","Type":"ContainerStarted","Data":"2c535f6ce45bd6825b7b760a6f368451fd16bb9e78bb41f0b0003ddd1b5b96e9"} Jan 25 08:15:02 crc kubenswrapper[4832]: I0125 08:15:02.201972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6hrr-config-pjzs9" event={"ID":"bebc619a-e953-4cd3-b90e-325f3b0344ff","Type":"ContainerStarted","Data":"975e15e7c534c218be44c7bb24195a24c8b9791dfc7ba693c4ed858e05f7b776"} Jan 25 08:15:02 crc kubenswrapper[4832]: I0125 08:15:02.236354 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dnzjb"] Jan 25 08:15:02 crc kubenswrapper[4832]: I0125 
08:15:02.240652 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" podStartSLOduration=2.24063439 podStartE2EDuration="2.24063439s" podCreationTimestamp="2026-01-25 08:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:02.230643827 +0000 UTC m=+1084.904467360" watchObservedRunningTime="2026-01-25 08:15:02.24063439 +0000 UTC m=+1084.914457923" Jan 25 08:15:02 crc kubenswrapper[4832]: W0125 08:15:02.247188 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b922f3_0125_4078_8ec7_ad4edd04d0ed.slice/crio-247fdb440e453d46419f89dae43eed7cd9e2f304234fbe8ac79722c75dd0e797 WatchSource:0}: Error finding container 247fdb440e453d46419f89dae43eed7cd9e2f304234fbe8ac79722c75dd0e797: Status 404 returned error can't find the container with id 247fdb440e453d46419f89dae43eed7cd9e2f304234fbe8ac79722c75dd0e797 Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.085993 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ldwjg"] Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.093496 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ldwjg"] Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.217133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"810093675ef3f8005a39cdc40f400aa7e42db3c3b0b37215df1dcdeb439a5d59"} Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.220291 4832 generic.go:334] "Generic (PLEG): container finished" podID="a053d916-f24b-4013-b7bf-9a4abe14e218" containerID="2c535f6ce45bd6825b7b760a6f368451fd16bb9e78bb41f0b0003ddd1b5b96e9" exitCode=0 Jan 25 
08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.220365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" event={"ID":"a053d916-f24b-4013-b7bf-9a4abe14e218","Type":"ContainerDied","Data":"2c535f6ce45bd6825b7b760a6f368451fd16bb9e78bb41f0b0003ddd1b5b96e9"} Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.221535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnzjb" event={"ID":"88b922f3-0125-4078-8ec7-ad4edd04d0ed","Type":"ContainerStarted","Data":"247fdb440e453d46419f89dae43eed7cd9e2f304234fbe8ac79722c75dd0e797"} Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.226607 4832 generic.go:334] "Generic (PLEG): container finished" podID="bebc619a-e953-4cd3-b90e-325f3b0344ff" containerID="449a56dd7d9c8f7d92a1b953146140e5c3eff8d435d90869a9361ea033ab56a2" exitCode=0 Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.226689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6hrr-config-pjzs9" event={"ID":"bebc619a-e953-4cd3-b90e-325f3b0344ff","Type":"ContainerDied","Data":"449a56dd7d9c8f7d92a1b953146140e5c3eff8d435d90869a9361ea033ab56a2"} Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.700722 4832 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddaa59b36-5024-41ae-88f1-49703006f341"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddaa59b36-5024-41ae-88f1-49703006f341] : Timed out while waiting for systemd to remove kubepods-besteffort-poddaa59b36_5024_41ae_88f1_49703006f341.slice" Jan 25 08:15:03 crc kubenswrapper[4832]: E0125 08:15:03.700792 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poddaa59b36-5024-41ae-88f1-49703006f341] : unable to destroy cgroup paths for cgroup [kubepods besteffort poddaa59b36-5024-41ae-88f1-49703006f341] : Timed out while 
waiting for systemd to remove kubepods-besteffort-poddaa59b36_5024_41ae_88f1_49703006f341.slice" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" podUID="daa59b36-5024-41ae-88f1-49703006f341" Jan 25 08:15:03 crc kubenswrapper[4832]: I0125 08:15:03.825966 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899aaa97-a9b6-4ee7-9499-2114b65607af" path="/var/lib/kubelet/pods/899aaa97-a9b6-4ee7-9499-2114b65607af/volumes" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.236751 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gfs8w" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.432196 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gfs8w"] Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.440653 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gfs8w"] Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.703331 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.753839 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run-ovn\") pod \"bebc619a-e953-4cd3-b90e-325f3b0344ff\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.754168 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bebc619a-e953-4cd3-b90e-325f3b0344ff" (UID: "bebc619a-e953-4cd3-b90e-325f3b0344ff"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.754194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvvrc\" (UniqueName: \"kubernetes.io/projected/bebc619a-e953-4cd3-b90e-325f3b0344ff-kube-api-access-dvvrc\") pod \"bebc619a-e953-4cd3-b90e-325f3b0344ff\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.754327 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-scripts\") pod \"bebc619a-e953-4cd3-b90e-325f3b0344ff\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.754351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-additional-scripts\") pod \"bebc619a-e953-4cd3-b90e-325f3b0344ff\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.754622 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run\") pod \"bebc619a-e953-4cd3-b90e-325f3b0344ff\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.754761 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-log-ovn\") pod \"bebc619a-e953-4cd3-b90e-325f3b0344ff\" (UID: \"bebc619a-e953-4cd3-b90e-325f3b0344ff\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.755243 4832 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.755264 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run" (OuterVolumeSpecName: "var-run") pod "bebc619a-e953-4cd3-b90e-325f3b0344ff" (UID: "bebc619a-e953-4cd3-b90e-325f3b0344ff"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.755491 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bebc619a-e953-4cd3-b90e-325f3b0344ff" (UID: "bebc619a-e953-4cd3-b90e-325f3b0344ff"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.755657 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bebc619a-e953-4cd3-b90e-325f3b0344ff" (UID: "bebc619a-e953-4cd3-b90e-325f3b0344ff"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.756034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-scripts" (OuterVolumeSpecName: "scripts") pod "bebc619a-e953-4cd3-b90e-325f3b0344ff" (UID: "bebc619a-e953-4cd3-b90e-325f3b0344ff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.762971 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebc619a-e953-4cd3-b90e-325f3b0344ff-kube-api-access-dvvrc" (OuterVolumeSpecName: "kube-api-access-dvvrc") pod "bebc619a-e953-4cd3-b90e-325f3b0344ff" (UID: "bebc619a-e953-4cd3-b90e-325f3b0344ff"). InnerVolumeSpecName "kube-api-access-dvvrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.766075 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a053d916-f24b-4013-b7bf-9a4abe14e218-secret-volume\") pod \"a053d916-f24b-4013-b7bf-9a4abe14e218\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856176 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5tr8\" (UniqueName: \"kubernetes.io/projected/a053d916-f24b-4013-b7bf-9a4abe14e218-kube-api-access-s5tr8\") pod \"a053d916-f24b-4013-b7bf-9a4abe14e218\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856202 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a053d916-f24b-4013-b7bf-9a4abe14e218-config-volume\") pod \"a053d916-f24b-4013-b7bf-9a4abe14e218\" (UID: \"a053d916-f24b-4013-b7bf-9a4abe14e218\") " Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856648 4832 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-run\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856665 4832 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bebc619a-e953-4cd3-b90e-325f3b0344ff-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856677 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvvrc\" (UniqueName: \"kubernetes.io/projected/bebc619a-e953-4cd3-b90e-325f3b0344ff-kube-api-access-dvvrc\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856688 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856696 4832 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bebc619a-e953-4cd3-b90e-325f3b0344ff-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.856977 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a053d916-f24b-4013-b7bf-9a4abe14e218-config-volume" (OuterVolumeSpecName: "config-volume") pod "a053d916-f24b-4013-b7bf-9a4abe14e218" (UID: "a053d916-f24b-4013-b7bf-9a4abe14e218"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.859481 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a053d916-f24b-4013-b7bf-9a4abe14e218-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a053d916-f24b-4013-b7bf-9a4abe14e218" (UID: "a053d916-f24b-4013-b7bf-9a4abe14e218"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.862674 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a053d916-f24b-4013-b7bf-9a4abe14e218-kube-api-access-s5tr8" (OuterVolumeSpecName: "kube-api-access-s5tr8") pod "a053d916-f24b-4013-b7bf-9a4abe14e218" (UID: "a053d916-f24b-4013-b7bf-9a4abe14e218"). InnerVolumeSpecName "kube-api-access-s5tr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.958502 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a053d916-f24b-4013-b7bf-9a4abe14e218-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.958540 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5tr8\" (UniqueName: \"kubernetes.io/projected/a053d916-f24b-4013-b7bf-9a4abe14e218-kube-api-access-s5tr8\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:04 crc kubenswrapper[4832]: I0125 08:15:04.958553 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a053d916-f24b-4013-b7bf-9a4abe14e218-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.260149 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n6hrr-config-pjzs9" event={"ID":"bebc619a-e953-4cd3-b90e-325f3b0344ff","Type":"ContainerDied","Data":"975e15e7c534c218be44c7bb24195a24c8b9791dfc7ba693c4ed858e05f7b776"} Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.260217 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975e15e7c534c218be44c7bb24195a24c8b9791dfc7ba693c4ed858e05f7b776" Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.260307 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n6hrr-config-pjzs9" Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.286451 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"6d6e4183e4e21dba715011decab3490060f228e2fa68cd7ebc8ff9c50b6987bb"} Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.286501 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"13b645b811a3349befd09f5f99a4636b4a36307a9e85b553fc0cfc63c8180ec9"} Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.286516 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"e82565f782680331c229806e72cb5e1df8a6154ead9775f387e331651f08660b"} Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.288629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" event={"ID":"a053d916-f24b-4013-b7bf-9a4abe14e218","Type":"ContainerDied","Data":"259538b408f57d9715d91cef9313625899b753600a47a579a270498023cb684a"} Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.288704 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259538b408f57d9715d91cef9313625899b753600a47a579a270498023cb684a" Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.288770 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm" Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.707981 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daa59b36-5024-41ae-88f1-49703006f341" path="/var/lib/kubelet/pods/daa59b36-5024-41ae-88f1-49703006f341/volumes" Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.733795 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n6hrr" Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.858007 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n6hrr-config-pjzs9"] Jan 25 08:15:05 crc kubenswrapper[4832]: I0125 08:15:05.858238 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n6hrr-config-pjzs9"] Jan 25 08:15:06 crc kubenswrapper[4832]: I0125 08:15:06.623616 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 25 08:15:06 crc kubenswrapper[4832]: I0125 08:15:06.754163 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"9e399b444cd19a0da623b8aa1598aa0192fbcef31d170bcc58758d2eea29d7fa"} Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.022624 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.868101 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebc619a-e953-4cd3-b90e-325f3b0344ff" path="/var/lib/kubelet/pods/bebc619a-e953-4cd3-b90e-325f3b0344ff/volumes" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.869425 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-khdxr"] Jan 25 08:15:07 crc kubenswrapper[4832]: E0125 08:15:07.869716 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc619a-e953-4cd3-b90e-325f3b0344ff" containerName="ovn-config" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.869732 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc619a-e953-4cd3-b90e-325f3b0344ff" containerName="ovn-config" Jan 25 08:15:07 crc kubenswrapper[4832]: E0125 08:15:07.869744 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a053d916-f24b-4013-b7bf-9a4abe14e218" containerName="collect-profiles" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.869750 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a053d916-f24b-4013-b7bf-9a4abe14e218" containerName="collect-profiles" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.869912 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a053d916-f24b-4013-b7bf-9a4abe14e218" containerName="collect-profiles" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.869934 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebc619a-e953-4cd3-b90e-325f3b0344ff" containerName="ovn-config" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.870449 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-khdxr"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.870486 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bdwvt"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.871238 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bdwvt"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.871258 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-95bb-account-create-update-9qtwc"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.882673 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.883292 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.884170 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-95bb-account-create-update-9qtwc"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.884194 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dlpsc"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.884864 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dlpsc"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.884889 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a9d0-account-create-update-5njf2"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.886019 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.886489 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.886856 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.896120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a9d0-account-create-update-5njf2"] Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.898178 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.898559 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.928943 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-operator-scripts\") pod \"neutron-db-create-dlpsc\" (UID: \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929013 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td4hg\" (UniqueName: \"kubernetes.io/projected/a4bac199-c6e9-4bef-b649-12aa5af881ab-kube-api-access-td4hg\") pod \"cinder-95bb-account-create-update-9qtwc\" (UID: \"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-operator-scripts\") pod \"barbican-a9d0-account-create-update-5njf2\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929069 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7640ab02-6a97-40ae-9d40-99e42123e170-operator-scripts\") pod \"barbican-db-create-bdwvt\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929092 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bac199-c6e9-4bef-b649-12aa5af881ab-operator-scripts\") pod \"cinder-95bb-account-create-update-9qtwc\" (UID: \"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-operator-scripts\") pod \"cinder-db-create-khdxr\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nhx\" (UniqueName: \"kubernetes.io/projected/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-kube-api-access-79nhx\") pod \"neutron-db-create-dlpsc\" (UID: \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kw6\" (UniqueName: \"kubernetes.io/projected/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-kube-api-access-v6kw6\") pod \"cinder-db-create-khdxr\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929214 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rvsd\" (UniqueName: \"kubernetes.io/projected/7640ab02-6a97-40ae-9d40-99e42123e170-kube-api-access-9rvsd\") pod \"barbican-db-create-bdwvt\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:07 crc kubenswrapper[4832]: I0125 08:15:07.929231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9glf\" (UniqueName: \"kubernetes.io/projected/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-kube-api-access-w9glf\") pod \"barbican-a9d0-account-create-update-5njf2\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.030823 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6kw6\" (UniqueName: \"kubernetes.io/projected/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-kube-api-access-v6kw6\") pod \"cinder-db-create-khdxr\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.030923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rvsd\" (UniqueName: \"kubernetes.io/projected/7640ab02-6a97-40ae-9d40-99e42123e170-kube-api-access-9rvsd\") pod \"barbican-db-create-bdwvt\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.030980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9glf\" (UniqueName: \"kubernetes.io/projected/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-kube-api-access-w9glf\") pod \"barbican-a9d0-account-create-update-5njf2\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " 
pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.031034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-operator-scripts\") pod \"neutron-db-create-dlpsc\" (UID: \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.031104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td4hg\" (UniqueName: \"kubernetes.io/projected/a4bac199-c6e9-4bef-b649-12aa5af881ab-kube-api-access-td4hg\") pod \"cinder-95bb-account-create-update-9qtwc\" (UID: \"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.031146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-operator-scripts\") pod \"barbican-a9d0-account-create-update-5njf2\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.031178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7640ab02-6a97-40ae-9d40-99e42123e170-operator-scripts\") pod \"barbican-db-create-bdwvt\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.031217 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bac199-c6e9-4bef-b649-12aa5af881ab-operator-scripts\") pod \"cinder-95bb-account-create-update-9qtwc\" (UID: 
\"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.031240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-operator-scripts\") pod \"cinder-db-create-khdxr\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.031308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79nhx\" (UniqueName: \"kubernetes.io/projected/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-kube-api-access-79nhx\") pod \"neutron-db-create-dlpsc\" (UID: \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.032070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-operator-scripts\") pod \"barbican-a9d0-account-create-update-5njf2\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.032089 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bac199-c6e9-4bef-b649-12aa5af881ab-operator-scripts\") pod \"cinder-95bb-account-create-update-9qtwc\" (UID: \"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.032203 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-operator-scripts\") pod \"neutron-db-create-dlpsc\" (UID: 
\"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.084181 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7640ab02-6a97-40ae-9d40-99e42123e170-operator-scripts\") pod \"barbican-db-create-bdwvt\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.085871 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-operator-scripts\") pod \"cinder-db-create-khdxr\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.096265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rvsd\" (UniqueName: \"kubernetes.io/projected/7640ab02-6a97-40ae-9d40-99e42123e170-kube-api-access-9rvsd\") pod \"barbican-db-create-bdwvt\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.097492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9glf\" (UniqueName: \"kubernetes.io/projected/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-kube-api-access-w9glf\") pod \"barbican-a9d0-account-create-update-5njf2\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.100896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nhx\" (UniqueName: \"kubernetes.io/projected/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-kube-api-access-79nhx\") pod \"neutron-db-create-dlpsc\" (UID: \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " 
pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.108551 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td4hg\" (UniqueName: \"kubernetes.io/projected/a4bac199-c6e9-4bef-b649-12aa5af881ab-kube-api-access-td4hg\") pod \"cinder-95bb-account-create-update-9qtwc\" (UID: \"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.109812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6kw6\" (UniqueName: \"kubernetes.io/projected/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-kube-api-access-v6kw6\") pod \"cinder-db-create-khdxr\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.145449 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7094-account-create-update-zccgm"] Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.146837 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.153920 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.159636 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7094-account-create-update-zccgm"] Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.211862 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.232476 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.245561 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.258068 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.278835 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.289753 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-58szm"] Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.290705 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-58szm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.297957 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.338601 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-58szm"] Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.341296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cfb\" (UniqueName: \"kubernetes.io/projected/b3771f9f-7c61-47ef-9977-96275f49cd91-kube-api-access-j8cfb\") pod \"neutron-7094-account-create-update-zccgm\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.341411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b3771f9f-7c61-47ef-9977-96275f49cd91-operator-scripts\") pod \"neutron-7094-account-create-update-zccgm\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.443674 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3771f9f-7c61-47ef-9977-96275f49cd91-operator-scripts\") pod \"neutron-7094-account-create-update-zccgm\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.443996 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8r5k\" (UniqueName: \"kubernetes.io/projected/5db077e1-3078-4290-91ea-4e099d11584a-kube-api-access-z8r5k\") pod \"root-account-create-update-58szm\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " pod="openstack/root-account-create-update-58szm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.444079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db077e1-3078-4290-91ea-4e099d11584a-operator-scripts\") pod \"root-account-create-update-58szm\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " pod="openstack/root-account-create-update-58szm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.444117 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8cfb\" (UniqueName: \"kubernetes.io/projected/b3771f9f-7c61-47ef-9977-96275f49cd91-kube-api-access-j8cfb\") pod \"neutron-7094-account-create-update-zccgm\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.445364 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3771f9f-7c61-47ef-9977-96275f49cd91-operator-scripts\") pod \"neutron-7094-account-create-update-zccgm\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.461466 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-csqzf"] Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.462421 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.470325 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xml8n" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.470536 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.470692 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.473309 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.475176 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-csqzf"] Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.491960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8cfb\" (UniqueName: \"kubernetes.io/projected/b3771f9f-7c61-47ef-9977-96275f49cd91-kube-api-access-j8cfb\") pod \"neutron-7094-account-create-update-zccgm\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.518498 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.545286 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db077e1-3078-4290-91ea-4e099d11584a-operator-scripts\") pod \"root-account-create-update-58szm\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " pod="openstack/root-account-create-update-58szm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.545493 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8r5k\" (UniqueName: \"kubernetes.io/projected/5db077e1-3078-4290-91ea-4e099d11584a-kube-api-access-z8r5k\") pod \"root-account-create-update-58szm\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " pod="openstack/root-account-create-update-58szm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.546551 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db077e1-3078-4290-91ea-4e099d11584a-operator-scripts\") pod \"root-account-create-update-58szm\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " pod="openstack/root-account-create-update-58szm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.646998 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-combined-ca-bundle\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.647070 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45bv\" (UniqueName: 
\"kubernetes.io/projected/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-kube-api-access-z45bv\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.647100 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-config-data\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.725315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8r5k\" (UniqueName: \"kubernetes.io/projected/5db077e1-3078-4290-91ea-4e099d11584a-kube-api-access-z8r5k\") pod \"root-account-create-update-58szm\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " pod="openstack/root-account-create-update-58szm" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.748929 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z45bv\" (UniqueName: \"kubernetes.io/projected/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-kube-api-access-z45bv\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.748988 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-config-data\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.749108 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-combined-ca-bundle\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.770175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-config-data\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.824274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z45bv\" (UniqueName: \"kubernetes.io/projected/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-kube-api-access-z45bv\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:08 crc kubenswrapper[4832]: I0125 08:15:08.850838 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-combined-ca-bundle\") pod \"keystone-db-sync-csqzf\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:09 crc kubenswrapper[4832]: I0125 08:15:09.084315 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-58szm" Jan 25 08:15:09 crc kubenswrapper[4832]: I0125 08:15:09.085047 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.485121 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dlpsc"] Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.499747 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a9d0-account-create-update-5njf2"] Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.513093 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-95bb-account-create-update-9qtwc"] Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.675117 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bdwvt"] Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.695729 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7094-account-create-update-zccgm"] Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.714288 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-khdxr"] Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.795267 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-csqzf"] Jan 25 08:15:10 crc kubenswrapper[4832]: I0125 08:15:10.912572 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-58szm"] Jan 25 08:15:11 crc kubenswrapper[4832]: W0125 08:15:11.011200 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4bac199_c6e9_4bef_b649_12aa5af881ab.slice/crio-d21919735541d638c2396b6329492379a0d4872ecbfa1cd10ad261a7a928dd49 WatchSource:0}: Error finding container d21919735541d638c2396b6329492379a0d4872ecbfa1cd10ad261a7a928dd49: Status 404 returned error can't find the container with id d21919735541d638c2396b6329492379a0d4872ecbfa1cd10ad261a7a928dd49 Jan 25 08:15:11 crc kubenswrapper[4832]: W0125 08:15:11.011708 
4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7640ab02_6a97_40ae_9d40_99e42123e170.slice/crio-7ba8735b4f8564bf97d16c8eae3452bae6cabe7d6e59239c45f9520ae56669a9 WatchSource:0}: Error finding container 7ba8735b4f8564bf97d16c8eae3452bae6cabe7d6e59239c45f9520ae56669a9: Status 404 returned error can't find the container with id 7ba8735b4f8564bf97d16c8eae3452bae6cabe7d6e59239c45f9520ae56669a9 Jan 25 08:15:11 crc kubenswrapper[4832]: W0125 08:15:11.036354 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd05c514f_1bc8_45c4_aa69_e8d08cfeb515.slice/crio-ad98443f3ab949386557481126b4bfa78caaa9daeaf4e6c55ae462cbaa96a01a WatchSource:0}: Error finding container ad98443f3ab949386557481126b4bfa78caaa9daeaf4e6c55ae462cbaa96a01a: Status 404 returned error can't find the container with id ad98443f3ab949386557481126b4bfa78caaa9daeaf4e6c55ae462cbaa96a01a Jan 25 08:15:11 crc kubenswrapper[4832]: W0125 08:15:11.042206 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d32c3b_2a6c_4a1e_a1c5_146a00bbba21.slice/crio-65d303c29287d6edd3c05f01de2e96c3677f18ec815a01cb6b1d37a21da24412 WatchSource:0}: Error finding container 65d303c29287d6edd3c05f01de2e96c3677f18ec815a01cb6b1d37a21da24412: Status 404 returned error can't find the container with id 65d303c29287d6edd3c05f01de2e96c3677f18ec815a01cb6b1d37a21da24412 Jan 25 08:15:11 crc kubenswrapper[4832]: W0125 08:15:11.049827 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd9939bf_1855_4b5d_8b7c_38e73d8a8a10.slice/crio-ca3f98a34f725c9bdd13bcd9c4963f5370be34a52eaa5bae5e1b468f8d6e85f0 WatchSource:0}: Error finding container ca3f98a34f725c9bdd13bcd9c4963f5370be34a52eaa5bae5e1b468f8d6e85f0: Status 404 returned 
error can't find the container with id ca3f98a34f725c9bdd13bcd9c4963f5370be34a52eaa5bae5e1b468f8d6e85f0 Jan 25 08:15:11 crc kubenswrapper[4832]: W0125 08:15:11.054673 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a33ab1_a365_4e45_b7aa_3208d9b16fd0.slice/crio-11e1f4d6bdd352b4bb625123c50ce867a959d8f7020dc67f0ba53077dfabf889 WatchSource:0}: Error finding container 11e1f4d6bdd352b4bb625123c50ce867a959d8f7020dc67f0ba53077dfabf889: Status 404 returned error can't find the container with id 11e1f4d6bdd352b4bb625123c50ce867a959d8f7020dc67f0ba53077dfabf889 Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.283044 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csqzf" event={"ID":"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10","Type":"ContainerStarted","Data":"ca3f98a34f725c9bdd13bcd9c4963f5370be34a52eaa5bae5e1b468f8d6e85f0"} Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.298646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7094-account-create-update-zccgm" event={"ID":"b3771f9f-7c61-47ef-9977-96275f49cd91","Type":"ContainerStarted","Data":"d7c57b766ed70bd621899cace1a1a31d3b1c80a0ca336a6936a8736facee86dd"} Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.301922 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-95bb-account-create-update-9qtwc" event={"ID":"a4bac199-c6e9-4bef-b649-12aa5af881ab","Type":"ContainerStarted","Data":"d21919735541d638c2396b6329492379a0d4872ecbfa1cd10ad261a7a928dd49"} Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.303148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dlpsc" event={"ID":"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21","Type":"ContainerStarted","Data":"65d303c29287d6edd3c05f01de2e96c3677f18ec815a01cb6b1d37a21da24412"} Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.303855 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-khdxr" event={"ID":"15a33ab1-a365-4e45-b7aa-3208d9b16fd0","Type":"ContainerStarted","Data":"11e1f4d6bdd352b4bb625123c50ce867a959d8f7020dc67f0ba53077dfabf889"} Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.304980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-58szm" event={"ID":"5db077e1-3078-4290-91ea-4e099d11584a","Type":"ContainerStarted","Data":"facc19527168253f474243262380f2e4199e2ca5e4982cfa86a429482b32b581"} Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.310789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9d0-account-create-update-5njf2" event={"ID":"d05c514f-1bc8-45c4-aa69-e8d08cfeb515","Type":"ContainerStarted","Data":"ad98443f3ab949386557481126b4bfa78caaa9daeaf4e6c55ae462cbaa96a01a"} Jan 25 08:15:11 crc kubenswrapper[4832]: I0125 08:15:11.312110 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bdwvt" event={"ID":"7640ab02-6a97-40ae-9d40-99e42123e170","Type":"ContainerStarted","Data":"7ba8735b4f8564bf97d16c8eae3452bae6cabe7d6e59239c45f9520ae56669a9"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.331607 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"a180b64a5a6ee601507858b6cbda03a524af89f5b5f87df67b64e8349baeea8d"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.332172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"9bb485d57e6e34894701a8466d211c75e3910eb7c39c29f1a4f771d312a04217"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.339355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bdwvt" 
event={"ID":"7640ab02-6a97-40ae-9d40-99e42123e170","Type":"ContainerStarted","Data":"7236a7397d9b7007dd81b2829d19e0a00b651840306d949d52e7cc5e4e72fad1"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.343672 4832 generic.go:334] "Generic (PLEG): container finished" podID="d05c514f-1bc8-45c4-aa69-e8d08cfeb515" containerID="51feac782fe6444dc2d01017ed8996c4c63c5a832d7d03361f500084111a7d6f" exitCode=0 Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.343785 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9d0-account-create-update-5njf2" event={"ID":"d05c514f-1bc8-45c4-aa69-e8d08cfeb515","Type":"ContainerDied","Data":"51feac782fe6444dc2d01017ed8996c4c63c5a832d7d03361f500084111a7d6f"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.345314 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7094-account-create-update-zccgm" event={"ID":"b3771f9f-7c61-47ef-9977-96275f49cd91","Type":"ContainerStarted","Data":"a00cfde4bfde10f46126d63276bf226cdbe3bea6b92b1cb55a658b51d3217bc7"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.348089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-95bb-account-create-update-9qtwc" event={"ID":"a4bac199-c6e9-4bef-b649-12aa5af881ab","Type":"ContainerStarted","Data":"d01fb318cba2e3e10a5923f98bd8c0680a4aa77dd407ff0faed75e0e0e47003b"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.350131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dlpsc" event={"ID":"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21","Type":"ContainerStarted","Data":"90184d494eb56a19649488a1af2182f74457b36d9179d268301e2ca7875a33f2"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.352824 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-khdxr" 
event={"ID":"15a33ab1-a365-4e45-b7aa-3208d9b16fd0","Type":"ContainerStarted","Data":"365b82ec7d97ec39d1787c8c19e678438c6114c19446fb79b5acf88ede37d16d"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.354276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-58szm" event={"ID":"5db077e1-3078-4290-91ea-4e099d11584a","Type":"ContainerStarted","Data":"851a6a1c2a9bdaeae4dfd13545cddb503f9ffdbe1ea4e4369837beefa242308a"} Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.479663 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-khdxr" podStartSLOduration=5.479642798 podStartE2EDuration="5.479642798s" podCreationTimestamp="2026-01-25 08:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:12.423880978 +0000 UTC m=+1095.097704511" watchObservedRunningTime="2026-01-25 08:15:12.479642798 +0000 UTC m=+1095.153466431" Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.543206 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-dlpsc" podStartSLOduration=5.543186072 podStartE2EDuration="5.543186072s" podCreationTimestamp="2026-01-25 08:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:12.535658496 +0000 UTC m=+1095.209482029" watchObservedRunningTime="2026-01-25 08:15:12.543186072 +0000 UTC m=+1095.217009605" Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.645290 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7094-account-create-update-zccgm" podStartSLOduration=4.645265686 podStartE2EDuration="4.645265686s" podCreationTimestamp="2026-01-25 08:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:12.638912747 +0000 UTC m=+1095.312736280" watchObservedRunningTime="2026-01-25 08:15:12.645265686 +0000 UTC m=+1095.319089239" Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.669779 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-58szm" podStartSLOduration=4.669763545 podStartE2EDuration="4.669763545s" podCreationTimestamp="2026-01-25 08:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:12.667514584 +0000 UTC m=+1095.341338117" watchObservedRunningTime="2026-01-25 08:15:12.669763545 +0000 UTC m=+1095.343587068" Jan 25 08:15:12 crc kubenswrapper[4832]: I0125 08:15:12.695350 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-95bb-account-create-update-9qtwc" podStartSLOduration=5.695331007 podStartE2EDuration="5.695331007s" podCreationTimestamp="2026-01-25 08:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:12.683433414 +0000 UTC m=+1095.357256947" watchObservedRunningTime="2026-01-25 08:15:12.695331007 +0000 UTC m=+1095.369154540" Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.367238 4832 generic.go:334] "Generic (PLEG): container finished" podID="15a33ab1-a365-4e45-b7aa-3208d9b16fd0" containerID="365b82ec7d97ec39d1787c8c19e678438c6114c19446fb79b5acf88ede37d16d" exitCode=0 Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.367477 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-khdxr" event={"ID":"15a33ab1-a365-4e45-b7aa-3208d9b16fd0","Type":"ContainerDied","Data":"365b82ec7d97ec39d1787c8c19e678438c6114c19446fb79b5acf88ede37d16d"} Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.370049 4832 generic.go:334] "Generic 
(PLEG): container finished" podID="5db077e1-3078-4290-91ea-4e099d11584a" containerID="851a6a1c2a9bdaeae4dfd13545cddb503f9ffdbe1ea4e4369837beefa242308a" exitCode=0 Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.370106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-58szm" event={"ID":"5db077e1-3078-4290-91ea-4e099d11584a","Type":"ContainerDied","Data":"851a6a1c2a9bdaeae4dfd13545cddb503f9ffdbe1ea4e4369837beefa242308a"} Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.373033 4832 generic.go:334] "Generic (PLEG): container finished" podID="7640ab02-6a97-40ae-9d40-99e42123e170" containerID="7236a7397d9b7007dd81b2829d19e0a00b651840306d949d52e7cc5e4e72fad1" exitCode=0 Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.373083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bdwvt" event={"ID":"7640ab02-6a97-40ae-9d40-99e42123e170","Type":"ContainerDied","Data":"7236a7397d9b7007dd81b2829d19e0a00b651840306d949d52e7cc5e4e72fad1"} Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.376473 4832 generic.go:334] "Generic (PLEG): container finished" podID="a4bac199-c6e9-4bef-b649-12aa5af881ab" containerID="d01fb318cba2e3e10a5923f98bd8c0680a4aa77dd407ff0faed75e0e0e47003b" exitCode=0 Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.376558 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-95bb-account-create-update-9qtwc" event={"ID":"a4bac199-c6e9-4bef-b649-12aa5af881ab","Type":"ContainerDied","Data":"d01fb318cba2e3e10a5923f98bd8c0680a4aa77dd407ff0faed75e0e0e47003b"} Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.383949 4832 generic.go:334] "Generic (PLEG): container finished" podID="78d32c3b-2a6c-4a1e-a1c5-146a00bbba21" containerID="90184d494eb56a19649488a1af2182f74457b36d9179d268301e2ca7875a33f2" exitCode=0 Jan 25 08:15:13 crc kubenswrapper[4832]: I0125 08:15:13.384165 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-dlpsc" event={"ID":"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21","Type":"ContainerDied","Data":"90184d494eb56a19649488a1af2182f74457b36d9179d268301e2ca7875a33f2"} Jan 25 08:15:14 crc kubenswrapper[4832]: I0125 08:15:14.440782 4832 generic.go:334] "Generic (PLEG): container finished" podID="b3771f9f-7c61-47ef-9977-96275f49cd91" containerID="a00cfde4bfde10f46126d63276bf226cdbe3bea6b92b1cb55a658b51d3217bc7" exitCode=0 Jan 25 08:15:14 crc kubenswrapper[4832]: I0125 08:15:14.440950 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7094-account-create-update-zccgm" event={"ID":"b3771f9f-7c61-47ef-9977-96275f49cd91","Type":"ContainerDied","Data":"a00cfde4bfde10f46126d63276bf226cdbe3bea6b92b1cb55a658b51d3217bc7"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.002080 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.100191 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7640ab02-6a97-40ae-9d40-99e42123e170-operator-scripts\") pod \"7640ab02-6a97-40ae-9d40-99e42123e170\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.100314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rvsd\" (UniqueName: \"kubernetes.io/projected/7640ab02-6a97-40ae-9d40-99e42123e170-kube-api-access-9rvsd\") pod \"7640ab02-6a97-40ae-9d40-99e42123e170\" (UID: \"7640ab02-6a97-40ae-9d40-99e42123e170\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.103535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7640ab02-6a97-40ae-9d40-99e42123e170-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7640ab02-6a97-40ae-9d40-99e42123e170" (UID: 
"7640ab02-6a97-40ae-9d40-99e42123e170"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.110741 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7640ab02-6a97-40ae-9d40-99e42123e170-kube-api-access-9rvsd" (OuterVolumeSpecName: "kube-api-access-9rvsd") pod "7640ab02-6a97-40ae-9d40-99e42123e170" (UID: "7640ab02-6a97-40ae-9d40-99e42123e170"). InnerVolumeSpecName "kube-api-access-9rvsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.202136 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rvsd\" (UniqueName: \"kubernetes.io/projected/7640ab02-6a97-40ae-9d40-99e42123e170-kube-api-access-9rvsd\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.202171 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7640ab02-6a97-40ae-9d40-99e42123e170-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.242331 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.252460 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.299943 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.304157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-operator-scripts\") pod \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\" (UID: \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.304293 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79nhx\" (UniqueName: \"kubernetes.io/projected/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-kube-api-access-79nhx\") pod \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\" (UID: \"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.304338 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-operator-scripts\") pod \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.304395 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9glf\" (UniqueName: \"kubernetes.io/projected/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-kube-api-access-w9glf\") pod \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\" (UID: \"d05c514f-1bc8-45c4-aa69-e8d08cfeb515\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.306576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78d32c3b-2a6c-4a1e-a1c5-146a00bbba21" (UID: "78d32c3b-2a6c-4a1e-a1c5-146a00bbba21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.318088 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-kube-api-access-79nhx" (OuterVolumeSpecName: "kube-api-access-79nhx") pod "78d32c3b-2a6c-4a1e-a1c5-146a00bbba21" (UID: "78d32c3b-2a6c-4a1e-a1c5-146a00bbba21"). InnerVolumeSpecName "kube-api-access-79nhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.320664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d05c514f-1bc8-45c4-aa69-e8d08cfeb515" (UID: "d05c514f-1bc8-45c4-aa69-e8d08cfeb515"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.324319 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-kube-api-access-w9glf" (OuterVolumeSpecName: "kube-api-access-w9glf") pod "d05c514f-1bc8-45c4-aa69-e8d08cfeb515" (UID: "d05c514f-1bc8-45c4-aa69-e8d08cfeb515"). InnerVolumeSpecName "kube-api-access-w9glf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.361075 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-58szm" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.405339 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8r5k\" (UniqueName: \"kubernetes.io/projected/5db077e1-3078-4290-91ea-4e099d11584a-kube-api-access-z8r5k\") pod \"5db077e1-3078-4290-91ea-4e099d11584a\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.405460 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bac199-c6e9-4bef-b649-12aa5af881ab-operator-scripts\") pod \"a4bac199-c6e9-4bef-b649-12aa5af881ab\" (UID: \"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.405485 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td4hg\" (UniqueName: \"kubernetes.io/projected/a4bac199-c6e9-4bef-b649-12aa5af881ab-kube-api-access-td4hg\") pod \"a4bac199-c6e9-4bef-b649-12aa5af881ab\" (UID: \"a4bac199-c6e9-4bef-b649-12aa5af881ab\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.405544 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db077e1-3078-4290-91ea-4e099d11584a-operator-scripts\") pod \"5db077e1-3078-4290-91ea-4e099d11584a\" (UID: \"5db077e1-3078-4290-91ea-4e099d11584a\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.406015 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4bac199-c6e9-4bef-b649-12aa5af881ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4bac199-c6e9-4bef-b649-12aa5af881ab" (UID: "a4bac199-c6e9-4bef-b649-12aa5af881ab"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.407140 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.407164 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9glf\" (UniqueName: \"kubernetes.io/projected/d05c514f-1bc8-45c4-aa69-e8d08cfeb515-kube-api-access-w9glf\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.407180 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bac199-c6e9-4bef-b649-12aa5af881ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.407193 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.407204 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79nhx\" (UniqueName: \"kubernetes.io/projected/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21-kube-api-access-79nhx\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.410516 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db077e1-3078-4290-91ea-4e099d11584a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5db077e1-3078-4290-91ea-4e099d11584a" (UID: "5db077e1-3078-4290-91ea-4e099d11584a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.412180 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4bac199-c6e9-4bef-b649-12aa5af881ab-kube-api-access-td4hg" (OuterVolumeSpecName: "kube-api-access-td4hg") pod "a4bac199-c6e9-4bef-b649-12aa5af881ab" (UID: "a4bac199-c6e9-4bef-b649-12aa5af881ab"). InnerVolumeSpecName "kube-api-access-td4hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.412507 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.417477 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db077e1-3078-4290-91ea-4e099d11584a-kube-api-access-z8r5k" (OuterVolumeSpecName: "kube-api-access-z8r5k") pod "5db077e1-3078-4290-91ea-4e099d11584a" (UID: "5db077e1-3078-4290-91ea-4e099d11584a"). InnerVolumeSpecName "kube-api-access-z8r5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.419465 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.470280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-95bb-account-create-update-9qtwc" event={"ID":"a4bac199-c6e9-4bef-b649-12aa5af881ab","Type":"ContainerDied","Data":"d21919735541d638c2396b6329492379a0d4872ecbfa1cd10ad261a7a928dd49"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.470325 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21919735541d638c2396b6329492379a0d4872ecbfa1cd10ad261a7a928dd49" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.470343 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-95bb-account-create-update-9qtwc" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.471796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dlpsc" event={"ID":"78d32c3b-2a6c-4a1e-a1c5-146a00bbba21","Type":"ContainerDied","Data":"65d303c29287d6edd3c05f01de2e96c3677f18ec815a01cb6b1d37a21da24412"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.471813 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d303c29287d6edd3c05f01de2e96c3677f18ec815a01cb6b1d37a21da24412" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.471878 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dlpsc" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.478656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-khdxr" event={"ID":"15a33ab1-a365-4e45-b7aa-3208d9b16fd0","Type":"ContainerDied","Data":"11e1f4d6bdd352b4bb625123c50ce867a959d8f7020dc67f0ba53077dfabf889"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.478718 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e1f4d6bdd352b4bb625123c50ce867a959d8f7020dc67f0ba53077dfabf889" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.478802 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-khdxr" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.480721 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-58szm" event={"ID":"5db077e1-3078-4290-91ea-4e099d11584a","Type":"ContainerDied","Data":"facc19527168253f474243262380f2e4199e2ca5e4982cfa86a429482b32b581"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.480746 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="facc19527168253f474243262380f2e4199e2ca5e4982cfa86a429482b32b581" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.480795 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-58szm" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.497736 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"279ff2841f86b653bbcc6311fe72eb8c0b1e3e6541315342123f68a85f5992d3"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.502785 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a9d0-account-create-update-5njf2" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.503805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a9d0-account-create-update-5njf2" event={"ID":"d05c514f-1bc8-45c4-aa69-e8d08cfeb515","Type":"ContainerDied","Data":"ad98443f3ab949386557481126b4bfa78caaa9daeaf4e6c55ae462cbaa96a01a"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.503835 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad98443f3ab949386557481126b4bfa78caaa9daeaf4e6c55ae462cbaa96a01a" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.506217 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bdwvt" event={"ID":"7640ab02-6a97-40ae-9d40-99e42123e170","Type":"ContainerDied","Data":"7ba8735b4f8564bf97d16c8eae3452bae6cabe7d6e59239c45f9520ae56669a9"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.506249 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba8735b4f8564bf97d16c8eae3452bae6cabe7d6e59239c45f9520ae56669a9" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.506318 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bdwvt" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.507803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6kw6\" (UniqueName: \"kubernetes.io/projected/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-kube-api-access-v6kw6\") pod \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.507941 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-operator-scripts\") pod \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\" (UID: \"15a33ab1-a365-4e45-b7aa-3208d9b16fd0\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.507970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3771f9f-7c61-47ef-9977-96275f49cd91-operator-scripts\") pod \"b3771f9f-7c61-47ef-9977-96275f49cd91\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.507989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8cfb\" (UniqueName: \"kubernetes.io/projected/b3771f9f-7c61-47ef-9977-96275f49cd91-kube-api-access-j8cfb\") pod \"b3771f9f-7c61-47ef-9977-96275f49cd91\" (UID: \"b3771f9f-7c61-47ef-9977-96275f49cd91\") " Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.508461 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8r5k\" (UniqueName: \"kubernetes.io/projected/5db077e1-3078-4290-91ea-4e099d11584a-kube-api-access-z8r5k\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.508479 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td4hg\" (UniqueName: 
\"kubernetes.io/projected/a4bac199-c6e9-4bef-b649-12aa5af881ab-kube-api-access-td4hg\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.508488 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db077e1-3078-4290-91ea-4e099d11584a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.508583 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3771f9f-7c61-47ef-9977-96275f49cd91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3771f9f-7c61-47ef-9977-96275f49cd91" (UID: "b3771f9f-7c61-47ef-9977-96275f49cd91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.508590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15a33ab1-a365-4e45-b7aa-3208d9b16fd0" (UID: "15a33ab1-a365-4e45-b7aa-3208d9b16fd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.511173 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-kube-api-access-v6kw6" (OuterVolumeSpecName: "kube-api-access-v6kw6") pod "15a33ab1-a365-4e45-b7aa-3208d9b16fd0" (UID: "15a33ab1-a365-4e45-b7aa-3208d9b16fd0"). InnerVolumeSpecName "kube-api-access-v6kw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.511962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7094-account-create-update-zccgm" event={"ID":"b3771f9f-7c61-47ef-9977-96275f49cd91","Type":"ContainerDied","Data":"d7c57b766ed70bd621899cace1a1a31d3b1c80a0ca336a6936a8736facee86dd"} Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.512008 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c57b766ed70bd621899cace1a1a31d3b1c80a0ca336a6936a8736facee86dd" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.512038 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7094-account-create-update-zccgm" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.515674 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3771f9f-7c61-47ef-9977-96275f49cd91-kube-api-access-j8cfb" (OuterVolumeSpecName: "kube-api-access-j8cfb") pod "b3771f9f-7c61-47ef-9977-96275f49cd91" (UID: "b3771f9f-7c61-47ef-9977-96275f49cd91"). InnerVolumeSpecName "kube-api-access-j8cfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.610299 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.610338 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3771f9f-7c61-47ef-9977-96275f49cd91-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.610348 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8cfb\" (UniqueName: \"kubernetes.io/projected/b3771f9f-7c61-47ef-9977-96275f49cd91-kube-api-access-j8cfb\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:16 crc kubenswrapper[4832]: I0125 08:15:16.610359 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6kw6\" (UniqueName: \"kubernetes.io/projected/15a33ab1-a365-4e45-b7aa-3208d9b16fd0-kube-api-access-v6kw6\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.528379 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"f7829c957f789078932b122e560349fafadcffb3cc46f6e39116c8565ae591a6"} Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.528732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"a54d4b650f9277123921982c8fc4c7b3cbce59c4dfacfeb2cf722773df9ce693"} Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.528747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"e5066372a3a651a3a6cff3ac6934ac65336e5698fbe30f027178a3bbb3917b8b"} Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.528755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"68ef9e02-9e33-48c3-a32b-ceae36687171","Type":"ContainerStarted","Data":"18038d9a408b76f3ef0f895f76181fe965fbe96ea47bcebfae960ba9e5054eb1"} Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.562422 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=24.429795795 podStartE2EDuration="36.562392683s" podCreationTimestamp="2026-01-25 08:14:41 +0000 UTC" firstStartedPulling="2026-01-25 08:14:59.008754109 +0000 UTC m=+1081.682577642" lastFinishedPulling="2026-01-25 08:15:11.141350987 +0000 UTC m=+1093.815174530" observedRunningTime="2026-01-25 08:15:17.56226987 +0000 UTC m=+1100.236093403" watchObservedRunningTime="2026-01-25 08:15:17.562392683 +0000 UTC m=+1100.236216216" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.835815 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-pl49p"] Jan 25 08:15:17 crc kubenswrapper[4832]: E0125 08:15:17.836248 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d32c3b-2a6c-4a1e-a1c5-146a00bbba21" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836272 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d32c3b-2a6c-4a1e-a1c5-146a00bbba21" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: E0125 08:15:17.836288 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db077e1-3078-4290-91ea-4e099d11584a" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836297 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5db077e1-3078-4290-91ea-4e099d11584a" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: E0125 08:15:17.836310 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4bac199-c6e9-4bef-b649-12aa5af881ab" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836317 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4bac199-c6e9-4bef-b649-12aa5af881ab" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: E0125 08:15:17.836332 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a33ab1-a365-4e45-b7aa-3208d9b16fd0" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836338 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a33ab1-a365-4e45-b7aa-3208d9b16fd0" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: E0125 08:15:17.836356 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3771f9f-7c61-47ef-9977-96275f49cd91" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836363 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3771f9f-7c61-47ef-9977-96275f49cd91" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: E0125 08:15:17.836385 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7640ab02-6a97-40ae-9d40-99e42123e170" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836395 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7640ab02-6a97-40ae-9d40-99e42123e170" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: E0125 08:15:17.836458 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05c514f-1bc8-45c4-aa69-e8d08cfeb515" containerName="mariadb-account-create-update" Jan 25 
08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836469 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05c514f-1bc8-45c4-aa69-e8d08cfeb515" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836640 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3771f9f-7c61-47ef-9977-96275f49cd91" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836657 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db077e1-3078-4290-91ea-4e099d11584a" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836668 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4bac199-c6e9-4bef-b649-12aa5af881ab" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836682 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05c514f-1bc8-45c4-aa69-e8d08cfeb515" containerName="mariadb-account-create-update" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836691 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7640ab02-6a97-40ae-9d40-99e42123e170" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836705 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a33ab1-a365-4e45-b7aa-3208d9b16fd0" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.836715 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d32c3b-2a6c-4a1e-a1c5-146a00bbba21" containerName="mariadb-database-create" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.837848 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.840100 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.847835 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-pl49p"] Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.936547 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.936661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwz2\" (UniqueName: \"kubernetes.io/projected/a036699e-21c9-45bd-abf1-f2b054143deb-kube-api-access-8pwz2\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.936691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.936739 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-config\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " 
pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.936777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:17 crc kubenswrapper[4832]: I0125 08:15:17.936823 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.038287 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.038356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwz2\" (UniqueName: \"kubernetes.io/projected/a036699e-21c9-45bd-abf1-f2b054143deb-kube-api-access-8pwz2\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.038382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " 
pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.038421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-config\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.038466 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.038513 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.039716 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-config\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.039806 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.040017 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.040086 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.040300 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.056280 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwz2\" (UniqueName: \"kubernetes.io/projected/a036699e-21c9-45bd-abf1-f2b054143deb-kube-api-access-8pwz2\") pod \"dnsmasq-dns-77585f5f8c-pl49p\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:18 crc kubenswrapper[4832]: I0125 08:15:18.155676 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.150203 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.150868 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.150918 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.151717 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc7fb24eb792d448b55ed5e2d984c4783247ec2dc70708259ed13f1676a5263b"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.151787 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://bc7fb24eb792d448b55ed5e2d984c4783247ec2dc70708259ed13f1676a5263b" gracePeriod=600 Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.623704 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="bc7fb24eb792d448b55ed5e2d984c4783247ec2dc70708259ed13f1676a5263b" exitCode=0 Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.623736 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"bc7fb24eb792d448b55ed5e2d984c4783247ec2dc70708259ed13f1676a5263b"} Jan 25 08:15:22 crc kubenswrapper[4832]: I0125 08:15:22.623797 4832 scope.go:117] "RemoveContainer" containerID="3375547b40eab52484bd4c11f9fadcc1b41ff739f66fbe9ad0a6f2e89555dcb1" Jan 25 08:15:33 crc kubenswrapper[4832]: E0125 08:15:33.744119 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 25 08:15:33 crc kubenswrapper[4832]: E0125 08:15:33.744917 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6g5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-dnzjb_openstack(88b922f3-0125-4078-8ec7-ad4edd04d0ed): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 25 08:15:33 crc kubenswrapper[4832]: E0125 08:15:33.746118 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-dnzjb" podUID="88b922f3-0125-4078-8ec7-ad4edd04d0ed" Jan 25 08:15:33 crc kubenswrapper[4832]: E0125 08:15:33.888851 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-dnzjb" podUID="88b922f3-0125-4078-8ec7-ad4edd04d0ed" Jan 25 08:15:34 crc kubenswrapper[4832]: I0125 08:15:34.214929 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-pl49p"] Jan 25 08:15:34 crc kubenswrapper[4832]: W0125 08:15:34.217816 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda036699e_21c9_45bd_abf1_f2b054143deb.slice/crio-405db18afb44995f1710855778265575f717ce0d7eb94b87fb394b5889ac089b WatchSource:0}: Error finding container 405db18afb44995f1710855778265575f717ce0d7eb94b87fb394b5889ac089b: Status 404 returned error can't find the container with id 405db18afb44995f1710855778265575f717ce0d7eb94b87fb394b5889ac089b Jan 25 08:15:34 crc kubenswrapper[4832]: I0125 08:15:34.897882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"a703522300807412e74dfb0216f7c46b79210bcc992ea5f87976c5936fa1c4d9"} Jan 25 08:15:34 crc kubenswrapper[4832]: I0125 08:15:34.904015 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="a036699e-21c9-45bd-abf1-f2b054143deb" containerID="4066cad4c98ab89ec880906941517b6251e245f1266874916962fc5317b0612b" exitCode=0 Jan 25 08:15:34 crc kubenswrapper[4832]: I0125 08:15:34.904268 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" event={"ID":"a036699e-21c9-45bd-abf1-f2b054143deb","Type":"ContainerDied","Data":"4066cad4c98ab89ec880906941517b6251e245f1266874916962fc5317b0612b"} Jan 25 08:15:34 crc kubenswrapper[4832]: I0125 08:15:34.904321 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" event={"ID":"a036699e-21c9-45bd-abf1-f2b054143deb","Type":"ContainerStarted","Data":"405db18afb44995f1710855778265575f717ce0d7eb94b87fb394b5889ac089b"} Jan 25 08:15:34 crc kubenswrapper[4832]: I0125 08:15:34.910892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csqzf" event={"ID":"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10","Type":"ContainerStarted","Data":"4eba20c9281a894eb2807c25bdda31d05f6c3826474f98e68c9778832d038975"} Jan 25 08:15:34 crc kubenswrapper[4832]: I0125 08:15:34.949349 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-csqzf" podStartSLOduration=4.254342827 podStartE2EDuration="26.949321695s" podCreationTimestamp="2026-01-25 08:15:08 +0000 UTC" firstStartedPulling="2026-01-25 08:15:11.083016937 +0000 UTC m=+1093.756840470" lastFinishedPulling="2026-01-25 08:15:33.777995805 +0000 UTC m=+1116.451819338" observedRunningTime="2026-01-25 08:15:34.943038278 +0000 UTC m=+1117.616861821" watchObservedRunningTime="2026-01-25 08:15:34.949321695 +0000 UTC m=+1117.623145228" Jan 25 08:15:35 crc kubenswrapper[4832]: I0125 08:15:35.924687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" 
event={"ID":"a036699e-21c9-45bd-abf1-f2b054143deb","Type":"ContainerStarted","Data":"63a74969493a9ca0c6b78b98ce92dafc4ce1cf7293bff14daadf9f061154a4b6"} Jan 25 08:15:35 crc kubenswrapper[4832]: I0125 08:15:35.949901 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" podStartSLOduration=18.949884542 podStartE2EDuration="18.949884542s" podCreationTimestamp="2026-01-25 08:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:35.942001155 +0000 UTC m=+1118.615824688" watchObservedRunningTime="2026-01-25 08:15:35.949884542 +0000 UTC m=+1118.623708075" Jan 25 08:15:36 crc kubenswrapper[4832]: I0125 08:15:36.935774 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:37 crc kubenswrapper[4832]: I0125 08:15:37.943559 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" containerID="4eba20c9281a894eb2807c25bdda31d05f6c3826474f98e68c9778832d038975" exitCode=0 Jan 25 08:15:37 crc kubenswrapper[4832]: I0125 08:15:37.943625 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csqzf" event={"ID":"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10","Type":"ContainerDied","Data":"4eba20c9281a894eb2807c25bdda31d05f6c3826474f98e68c9778832d038975"} Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.268683 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.391583 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z45bv\" (UniqueName: \"kubernetes.io/projected/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-kube-api-access-z45bv\") pod \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.391719 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-config-data\") pod \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.391977 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-combined-ca-bundle\") pod \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\" (UID: \"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10\") " Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.405721 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-kube-api-access-z45bv" (OuterVolumeSpecName: "kube-api-access-z45bv") pod "dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" (UID: "dd9939bf-1855-4b5d-8b7c-38e73d8a8a10"). InnerVolumeSpecName "kube-api-access-z45bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.423477 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" (UID: "dd9939bf-1855-4b5d-8b7c-38e73d8a8a10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.465068 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-config-data" (OuterVolumeSpecName: "config-data") pod "dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" (UID: "dd9939bf-1855-4b5d-8b7c-38e73d8a8a10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.493743 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.493785 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.493804 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z45bv\" (UniqueName: \"kubernetes.io/projected/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10-kube-api-access-z45bv\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.962346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-csqzf" event={"ID":"dd9939bf-1855-4b5d-8b7c-38e73d8a8a10","Type":"ContainerDied","Data":"ca3f98a34f725c9bdd13bcd9c4963f5370be34a52eaa5bae5e1b468f8d6e85f0"} Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.962465 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca3f98a34f725c9bdd13bcd9c4963f5370be34a52eaa5bae5e1b468f8d6e85f0" Jan 25 08:15:39 crc kubenswrapper[4832]: I0125 08:15:39.962433 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-csqzf" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.252531 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vn66d"] Jan 25 08:15:40 crc kubenswrapper[4832]: E0125 08:15:40.252887 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" containerName="keystone-db-sync" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.252904 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" containerName="keystone-db-sync" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.253048 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" containerName="keystone-db-sync" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.253580 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.256503 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.256584 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.256662 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.257043 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xml8n" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.257054 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.274577 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vn66d"] Jan 25 08:15:40 crc 
kubenswrapper[4832]: I0125 08:15:40.281292 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-pl49p"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.283532 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" podUID="a036699e-21c9-45bd-abf1-f2b054143deb" containerName="dnsmasq-dns" containerID="cri-o://63a74969493a9ca0c6b78b98ce92dafc4ce1cf7293bff14daadf9f061154a4b6" gracePeriod=10 Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.285555 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.347660 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-gj9pp"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.349016 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.374377 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-gj9pp"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.416886 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-combined-ca-bundle\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.416948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-config-data\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 
08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.416969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-credential-keys\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.416998 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-scripts\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.417055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-fernet-keys\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.417077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndndd\" (UniqueName: \"kubernetes.io/projected/5e0cb7b1-ca34-4d43-ab93-febd41f35489-kube-api-access-ndndd\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.449685 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85c746769-89kvs"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.455482 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.461016 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.461069 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.461368 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.461526 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xpz6j" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.490201 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pfc28"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.491201 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.499218 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.499668 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.499797 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d67qp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.513127 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85c746769-89kvs"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-fernet-keys\") pod \"keystone-bootstrap-vn66d\" (UID: 
\"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518502 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndndd\" (UniqueName: \"kubernetes.io/projected/5e0cb7b1-ca34-4d43-ab93-febd41f35489-kube-api-access-ndndd\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-config\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-combined-ca-bundle\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518667 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpkg\" (UniqueName: \"kubernetes.io/projected/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-kube-api-access-sbpkg\") pod 
\"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-config-data\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-credential-keys\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518763 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-svc\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518801 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-scripts\") pod \"keystone-bootstrap-vn66d\" (UID: 
\"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.518847 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.523116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-fernet-keys\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.523854 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-config-data\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.544264 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-credential-keys\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.545112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-combined-ca-bundle\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc 
kubenswrapper[4832]: I0125 08:15:40.547548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-scripts\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.579168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndndd\" (UniqueName: \"kubernetes.io/projected/5e0cb7b1-ca34-4d43-ab93-febd41f35489-kube-api-access-ndndd\") pod \"keystone-bootstrap-vn66d\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.595550 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pfc28"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.620979 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8129d5bc-af98-4ef4-b204-fc568ac4ae11-logs\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621064 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-scripts\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " 
pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v67wb\" (UniqueName: \"kubernetes.io/projected/8129d5bc-af98-4ef4-b204-fc568ac4ae11-kube-api-access-v67wb\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-combined-ca-bundle\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-config\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621146 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-config-data\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " 
pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8129d5bc-af98-4ef4-b204-fc568ac4ae11-horizon-secret-key\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621227 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck2dk\" (UniqueName: \"kubernetes.io/projected/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-kube-api-access-ck2dk\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-config\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621284 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbpkg\" (UniqueName: \"kubernetes.io/projected/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-kube-api-access-sbpkg\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621309 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " 
pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.621342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-svc\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.622227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-svc\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.622795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.623296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.628499 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.630586 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.635913 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.636089 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.639054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-config\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.642250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.660168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbpkg\" (UniqueName: \"kubernetes.io/projected/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-kube-api-access-sbpkg\") pod \"dnsmasq-dns-55fff446b9-gj9pp\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.676144 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vrvb2"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.677227 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.680831 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.681028 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-975sp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.681182 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.685523 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.698888 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.723372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.723630 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-scripts\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.723708 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v67wb\" (UniqueName: \"kubernetes.io/projected/8129d5bc-af98-4ef4-b204-fc568ac4ae11-kube-api-access-v67wb\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " 
pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.723788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-combined-ca-bundle\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.723880 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-config\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.723958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-config-data\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724060 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-scripts\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724147 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5q9s\" (UniqueName: \"kubernetes.io/projected/b48b257e-ddb7-486d-8788-489ca788ac1f-kube-api-access-t5q9s\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724231 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8129d5bc-af98-4ef4-b204-fc568ac4ae11-horizon-secret-key\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck2dk\" (UniqueName: \"kubernetes.io/projected/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-kube-api-access-ck2dk\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724527 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-run-httpd\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724614 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-config-data\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724704 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-log-httpd\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.724831 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8129d5bc-af98-4ef4-b204-fc568ac4ae11-logs\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.725314 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8129d5bc-af98-4ef4-b204-fc568ac4ae11-logs\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.727245 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-scripts\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.732689 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-config-data\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.733893 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-config\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: 
I0125 08:15:40.740440 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8129d5bc-af98-4ef4-b204-fc568ac4ae11-horizon-secret-key\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.753324 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vrvb2"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.756041 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-combined-ca-bundle\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.781369 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v67wb\" (UniqueName: \"kubernetes.io/projected/8129d5bc-af98-4ef4-b204-fc568ac4ae11-kube-api-access-v67wb\") pod \"horizon-85c746769-89kvs\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.781764 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85c746769-89kvs" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.782570 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck2dk\" (UniqueName: \"kubernetes.io/projected/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-kube-api-access-ck2dk\") pod \"neutron-db-sync-pfc28\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.839034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-run-httpd\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.839081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-config-data\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.839114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-db-sync-config-data\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.839664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-run-httpd\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.841864 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxq2n\" (UniqueName: \"kubernetes.io/projected/e793ce7a-261b-4b97-8436-c7a5efc5e126-kube-api-access-vxq2n\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.842323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-log-httpd\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.842635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.842788 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-config-data\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.842925 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-combined-ca-bundle\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.843020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-scripts\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.843097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-scripts\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.843187 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e793ce7a-261b-4b97-8436-c7a5efc5e126-etc-machine-id\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.843277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5q9s\" (UniqueName: \"kubernetes.io/projected/b48b257e-ddb7-486d-8788-489ca788ac1f-kube-api-access-t5q9s\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.843356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.843826 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-log-httpd\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 
08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.849983 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfc28" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.853424 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-547d75495c-rgz7z"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.855005 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.860294 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.861992 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-config-data\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.867200 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.867207 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-scripts\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.874231 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-t5q9s\" (UniqueName: \"kubernetes.io/projected/b48b257e-ddb7-486d-8788-489ca788ac1f-kube-api-access-t5q9s\") pod \"ceilometer-0\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " pod="openstack/ceilometer-0" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.874623 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.884131 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-gj9pp"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.893187 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547d75495c-rgz7z"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.943522 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xdqfx"] Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.944858 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.944994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-db-sync-config-data\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.945048 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxq2n\" (UniqueName: \"kubernetes.io/projected/e793ce7a-261b-4b97-8436-c7a5efc5e126-kube-api-access-vxq2n\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.945140 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-config-data\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.945178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-combined-ca-bundle\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.945206 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-scripts\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.945238 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e793ce7a-261b-4b97-8436-c7a5efc5e126-etc-machine-id\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.945348 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e793ce7a-261b-4b97-8436-c7a5efc5e126-etc-machine-id\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.951415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-config-data\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.951585 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bmfkx" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.951878 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.952008 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-db-sync-config-data\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.957340 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-scripts\") pod 
\"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.964129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-combined-ca-bundle\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.967647 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxq2n\" (UniqueName: \"kubernetes.io/projected/e793ce7a-261b-4b97-8436-c7a5efc5e126-kube-api-access-vxq2n\") pod \"cinder-db-sync-vrvb2\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:40 crc kubenswrapper[4832]: I0125 08:15:40.987798 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-75nt4"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.010774 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.038516 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xdqfx"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.040434 4832 generic.go:334] "Generic (PLEG): container finished" podID="a036699e-21c9-45bd-abf1-f2b054143deb" containerID="63a74969493a9ca0c6b78b98ce92dafc4ce1cf7293bff14daadf9f061154a4b6" exitCode=0 Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.040540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" event={"ID":"a036699e-21c9-45bd-abf1-f2b054143deb","Type":"ContainerDied","Data":"63a74969493a9ca0c6b78b98ce92dafc4ce1cf7293bff14daadf9f061154a4b6"} Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.059123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-config-data\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.059186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-logs\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.059237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-horizon-secret-key\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc 
kubenswrapper[4832]: I0125 08:15:41.059268 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xcbj\" (UniqueName: \"kubernetes.io/projected/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-kube-api-access-5xcbj\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.059285 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-combined-ca-bundle\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.059324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mhf\" (UniqueName: \"kubernetes.io/projected/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-kube-api-access-52mhf\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.059355 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-db-sync-config-data\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.059379 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-scripts\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 
25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.075489 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7tnnv"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.076809 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.084249 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.084527 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.085053 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gj2fx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.089448 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-75nt4"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.100612 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.104646 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7tnnv"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.106105 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-horizon-secret-key\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160638 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xcbj\" (UniqueName: \"kubernetes.io/projected/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-kube-api-access-5xcbj\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-combined-ca-bundle\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160701 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") 
" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160749 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52mhf\" (UniqueName: \"kubernetes.io/projected/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-kube-api-access-52mhf\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160780 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-db-sync-config-data\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhv9f\" (UniqueName: \"kubernetes.io/projected/91ca2186-0d45-4246-9a45-4cca828f2e82-kube-api-access-vhv9f\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-scripts\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " 
pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160870 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-config-data\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-logs\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.160939 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-config\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.164722 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-horizon-secret-key\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: 
I0125 08:15:41.165212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-scripts\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.166165 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-config-data\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.166419 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-logs\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.170147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-combined-ca-bundle\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.171065 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-db-sync-config-data\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.182490 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xcbj\" (UniqueName: 
\"kubernetes.io/projected/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-kube-api-access-5xcbj\") pod \"barbican-db-sync-xdqfx\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.189177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52mhf\" (UniqueName: \"kubernetes.io/projected/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-kube-api-access-52mhf\") pod \"horizon-547d75495c-rgz7z\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.214099 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-config\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262319 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262361 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-config-data\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262396 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-logs\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262413 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgc9s\" (UniqueName: \"kubernetes.io/projected/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-kube-api-access-kgc9s\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-combined-ca-bundle\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262464 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262489 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vhv9f\" (UniqueName: \"kubernetes.io/projected/91ca2186-0d45-4246-9a45-4cca828f2e82-kube-api-access-vhv9f\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-scripts\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.262543 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.265554 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.266211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.266682 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.266715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.267219 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-config\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.286514 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhv9f\" (UniqueName: \"kubernetes.io/projected/91ca2186-0d45-4246-9a45-4cca828f2e82-kube-api-access-vhv9f\") pod \"dnsmasq-dns-76fcf4b695-75nt4\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.306266 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.316226 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-gj9pp"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.344412 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.364306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-config-data\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.364361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-logs\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.364419 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgc9s\" (UniqueName: \"kubernetes.io/projected/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-kube-api-access-kgc9s\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.364470 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-combined-ca-bundle\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.364515 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-scripts\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 
08:15:41.366587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-logs\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.372759 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-scripts\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.373144 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-combined-ca-bundle\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.374450 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-config-data\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.390300 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgc9s\" (UniqueName: \"kubernetes.io/projected/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-kube-api-access-kgc9s\") pod \"placement-db-sync-7tnnv\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.407347 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7tnnv" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.555685 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pfc28"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.570842 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85c746769-89kvs"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.651688 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vn66d"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.848187 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.888353 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vrvb2"] Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.954310 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-547d75495c-rgz7z"] Jan 25 08:15:41 crc kubenswrapper[4832]: W0125 08:15:41.969811 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05d31ada_06df_4ffc_9e3a_3d476edaaa4f.slice/crio-bfd4a48c6cf38b45521a7be3e60b04d0f5dfd59e4242bb6b85f21af1571012e6 WatchSource:0}: Error finding container bfd4a48c6cf38b45521a7be3e60b04d0f5dfd59e4242bb6b85f21af1571012e6: Status 404 returned error can't find the container with id bfd4a48c6cf38b45521a7be3e60b04d0f5dfd59e4242bb6b85f21af1571012e6 Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.978033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-svc\") pod \"a036699e-21c9-45bd-abf1-f2b054143deb\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.978084 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-nb\") pod \"a036699e-21c9-45bd-abf1-f2b054143deb\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.978134 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwz2\" (UniqueName: \"kubernetes.io/projected/a036699e-21c9-45bd-abf1-f2b054143deb-kube-api-access-8pwz2\") pod \"a036699e-21c9-45bd-abf1-f2b054143deb\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.978279 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-swift-storage-0\") pod \"a036699e-21c9-45bd-abf1-f2b054143deb\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.978314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-sb\") pod \"a036699e-21c9-45bd-abf1-f2b054143deb\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " Jan 25 08:15:41 crc kubenswrapper[4832]: I0125 08:15:41.978364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-config\") pod \"a036699e-21c9-45bd-abf1-f2b054143deb\" (UID: \"a036699e-21c9-45bd-abf1-f2b054143deb\") " Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.003881 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.005280 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a036699e-21c9-45bd-abf1-f2b054143deb-kube-api-access-8pwz2" (OuterVolumeSpecName: "kube-api-access-8pwz2") pod "a036699e-21c9-45bd-abf1-f2b054143deb" (UID: "a036699e-21c9-45bd-abf1-f2b054143deb"). InnerVolumeSpecName "kube-api-access-8pwz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.037476 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a036699e-21c9-45bd-abf1-f2b054143deb" (UID: "a036699e-21c9-45bd-abf1-f2b054143deb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.039789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a036699e-21c9-45bd-abf1-f2b054143deb" (UID: "a036699e-21c9-45bd-abf1-f2b054143deb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.046852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a036699e-21c9-45bd-abf1-f2b054143deb" (UID: "a036699e-21c9-45bd-abf1-f2b054143deb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.055408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrvb2" event={"ID":"e793ce7a-261b-4b97-8436-c7a5efc5e126","Type":"ContainerStarted","Data":"4ef0043ab9b84224998d2924f415885a4ca6ee4ec856bd4bbbdc72dd45a762ee"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.057658 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85c746769-89kvs" event={"ID":"8129d5bc-af98-4ef4-b204-fc568ac4ae11","Type":"ContainerStarted","Data":"a370719ca3b851c5f1e01c1410d84eaf2f2ce1e456ee3b1a05b790cfc85b3083"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.059033 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerStarted","Data":"bd97d431faa8df4bf55472aa074f3fa273172c7c61c899751f1fbe4fb586947e"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.060962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547d75495c-rgz7z" event={"ID":"05d31ada-06df-4ffc-9e3a-3d476edaaa4f","Type":"ContainerStarted","Data":"bfd4a48c6cf38b45521a7be3e60b04d0f5dfd59e4242bb6b85f21af1571012e6"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.063570 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vn66d" event={"ID":"5e0cb7b1-ca34-4d43-ab93-febd41f35489","Type":"ContainerStarted","Data":"d399a17cccba09c5367e9af52b2eed1ccb200a38317606a105d12e84fbc4af18"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.063625 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vn66d" event={"ID":"5e0cb7b1-ca34-4d43-ab93-febd41f35489","Type":"ContainerStarted","Data":"5803cdbd66e38038d219fdaeb9fbbef9f7acecd3c4c7fccb510ebcb406268a59"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.065167 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" event={"ID":"d66779ca-60d0-4bce-9bb6-e10b6508ad7f","Type":"ContainerStarted","Data":"b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.065210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" event={"ID":"d66779ca-60d0-4bce-9bb6-e10b6508ad7f","Type":"ContainerStarted","Data":"59af8d52bc232865a0abe735b2c71eaa4827ecd847eb5b4f817030f1c66a0de8"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.065454 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" podUID="d66779ca-60d0-4bce-9bb6-e10b6508ad7f" containerName="init" containerID="cri-o://b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2" gracePeriod=10 Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.071752 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.071772 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-pl49p" event={"ID":"a036699e-21c9-45bd-abf1-f2b054143deb","Type":"ContainerDied","Data":"405db18afb44995f1710855778265575f717ce0d7eb94b87fb394b5889ac089b"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.071836 4832 scope.go:117] "RemoveContainer" containerID="63a74969493a9ca0c6b78b98ce92dafc4ce1cf7293bff14daadf9f061154a4b6" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.076948 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfc28" event={"ID":"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11","Type":"ContainerStarted","Data":"5d8a4aebb6051b9a2ea061e44a57637bc058c8f737d722b1a2136d729d292408"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.077087 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfc28" event={"ID":"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11","Type":"ContainerStarted","Data":"d2ede1aa46e229d2cd74203e2c10cd703d999c382ed21e04431c7ef6d77da762"} Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.080745 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.080770 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.080781 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwz2\" (UniqueName: \"kubernetes.io/projected/a036699e-21c9-45bd-abf1-f2b054143deb-kube-api-access-8pwz2\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 
crc kubenswrapper[4832]: I0125 08:15:42.080791 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.092344 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a036699e-21c9-45bd-abf1-f2b054143deb" (UID: "a036699e-21c9-45bd-abf1-f2b054143deb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.101892 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-config" (OuterVolumeSpecName: "config") pod "a036699e-21c9-45bd-abf1-f2b054143deb" (UID: "a036699e-21c9-45bd-abf1-f2b054143deb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.104836 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xdqfx"] Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.111767 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vn66d" podStartSLOduration=2.111742455 podStartE2EDuration="2.111742455s" podCreationTimestamp="2026-01-25 08:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:42.088201999 +0000 UTC m=+1124.762025532" watchObservedRunningTime="2026-01-25 08:15:42.111742455 +0000 UTC m=+1124.785565998" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.133902 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-75nt4"] Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.145544 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7tnnv"] Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.158892 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pfc28" podStartSLOduration=2.158869109 podStartE2EDuration="2.158869109s" podCreationTimestamp="2026-01-25 08:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:15:42.133872927 +0000 UTC m=+1124.807696460" watchObservedRunningTime="2026-01-25 08:15:42.158869109 +0000 UTC m=+1124.832692642" Jan 25 08:15:42 crc kubenswrapper[4832]: W0125 08:15:42.169965 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91ca2186_0d45_4246_9a45_4cca828f2e82.slice/crio-ca321f98194076d5703d70240a900848c2a8c4c646e8b2085a92dfeadb9d203d WatchSource:0}: Error 
finding container ca321f98194076d5703d70240a900848c2a8c4c646e8b2085a92dfeadb9d203d: Status 404 returned error can't find the container with id ca321f98194076d5703d70240a900848c2a8c4c646e8b2085a92dfeadb9d203d Jan 25 08:15:42 crc kubenswrapper[4832]: W0125 08:15:42.171198 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a44ba3_2a1f_4189_80d7_cd0c8795bd9a.slice/crio-52cb9f8c097f83c92f04258a204ad51177bbcb4f0218431527547abdd379a578 WatchSource:0}: Error finding container 52cb9f8c097f83c92f04258a204ad51177bbcb4f0218431527547abdd379a578: Status 404 returned error can't find the container with id 52cb9f8c097f83c92f04258a204ad51177bbcb4f0218431527547abdd379a578 Jan 25 08:15:42 crc kubenswrapper[4832]: W0125 08:15:42.176040 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4bbdba8_c7bc_4dd7_ae19_1655bc089a86.slice/crio-9852031006e2acef3d7437f0532401e427230998dce4ddc63ff7eb29fb7daee9 WatchSource:0}: Error finding container 9852031006e2acef3d7437f0532401e427230998dce4ddc63ff7eb29fb7daee9: Status 404 returned error can't find the container with id 9852031006e2acef3d7437f0532401e427230998dce4ddc63ff7eb29fb7daee9 Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.183444 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.184251 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a036699e-21c9-45bd-abf1-f2b054143deb-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.189666 4832 scope.go:117] "RemoveContainer" containerID="4066cad4c98ab89ec880906941517b6251e245f1266874916962fc5317b0612b" Jan 25 
08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.410173 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-pl49p"] Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.417869 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-pl49p"] Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.589736 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.693228 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-swift-storage-0\") pod \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.693297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-sb\") pod \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.693320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-nb\") pod \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.693468 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbpkg\" (UniqueName: \"kubernetes.io/projected/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-kube-api-access-sbpkg\") pod \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 
08:15:42.693500 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-config\") pod \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.693567 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-svc\") pod \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\" (UID: \"d66779ca-60d0-4bce-9bb6-e10b6508ad7f\") " Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.722438 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-kube-api-access-sbpkg" (OuterVolumeSpecName: "kube-api-access-sbpkg") pod "d66779ca-60d0-4bce-9bb6-e10b6508ad7f" (UID: "d66779ca-60d0-4bce-9bb6-e10b6508ad7f"). InnerVolumeSpecName "kube-api-access-sbpkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.755883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d66779ca-60d0-4bce-9bb6-e10b6508ad7f" (UID: "d66779ca-60d0-4bce-9bb6-e10b6508ad7f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.789804 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d66779ca-60d0-4bce-9bb6-e10b6508ad7f" (UID: "d66779ca-60d0-4bce-9bb6-e10b6508ad7f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.794772 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-config" (OuterVolumeSpecName: "config") pod "d66779ca-60d0-4bce-9bb6-e10b6508ad7f" (UID: "d66779ca-60d0-4bce-9bb6-e10b6508ad7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.798586 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.798665 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.798741 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.798752 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbpkg\" (UniqueName: \"kubernetes.io/projected/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-kube-api-access-sbpkg\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.856306 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d66779ca-60d0-4bce-9bb6-e10b6508ad7f" (UID: "d66779ca-60d0-4bce-9bb6-e10b6508ad7f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.873127 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d66779ca-60d0-4bce-9bb6-e10b6508ad7f" (UID: "d66779ca-60d0-4bce-9bb6-e10b6508ad7f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.901510 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.901549 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d66779ca-60d0-4bce-9bb6-e10b6508ad7f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:42 crc kubenswrapper[4832]: I0125 08:15:42.973476 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547d75495c-rgz7z"] Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.082161 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cc6ffb9d5-b9rt2"] Jan 25 08:15:43 crc kubenswrapper[4832]: E0125 08:15:43.082545 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a036699e-21c9-45bd-abf1-f2b054143deb" containerName="dnsmasq-dns" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.082558 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a036699e-21c9-45bd-abf1-f2b054143deb" containerName="dnsmasq-dns" Jan 25 08:15:43 crc kubenswrapper[4832]: E0125 08:15:43.082583 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66779ca-60d0-4bce-9bb6-e10b6508ad7f" containerName="init" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 
08:15:43.082589 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66779ca-60d0-4bce-9bb6-e10b6508ad7f" containerName="init" Jan 25 08:15:43 crc kubenswrapper[4832]: E0125 08:15:43.082608 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a036699e-21c9-45bd-abf1-f2b054143deb" containerName="init" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.082615 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a036699e-21c9-45bd-abf1-f2b054143deb" containerName="init" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.082836 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66779ca-60d0-4bce-9bb6-e10b6508ad7f" containerName="init" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.082870 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a036699e-21c9-45bd-abf1-f2b054143deb" containerName="dnsmasq-dns" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.083876 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.091662 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc6ffb9d5-b9rt2"] Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.127714 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.128593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tnnv" event={"ID":"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a","Type":"ContainerStarted","Data":"52cb9f8c097f83c92f04258a204ad51177bbcb4f0218431527547abdd379a578"} Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.144193 4832 generic.go:334] "Generic (PLEG): container finished" podID="d66779ca-60d0-4bce-9bb6-e10b6508ad7f" containerID="b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2" exitCode=0 Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.144289 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" event={"ID":"d66779ca-60d0-4bce-9bb6-e10b6508ad7f","Type":"ContainerDied","Data":"b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2"} Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.144321 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" event={"ID":"d66779ca-60d0-4bce-9bb6-e10b6508ad7f","Type":"ContainerDied","Data":"59af8d52bc232865a0abe735b2c71eaa4827ecd847eb5b4f817030f1c66a0de8"} Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.144337 4832 scope.go:117] "RemoveContainer" containerID="b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.144549 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-gj9pp" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.341774 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xdqfx" event={"ID":"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86","Type":"ContainerStarted","Data":"9852031006e2acef3d7437f0532401e427230998dce4ddc63ff7eb29fb7daee9"} Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.354760 4832 generic.go:334] "Generic (PLEG): container finished" podID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerID="e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4" exitCode=0 Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.356187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" event={"ID":"91ca2186-0d45-4246-9a45-4cca828f2e82","Type":"ContainerDied","Data":"e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4"} Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.356220 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" event={"ID":"91ca2186-0d45-4246-9a45-4cca828f2e82","Type":"ContainerStarted","Data":"ca321f98194076d5703d70240a900848c2a8c4c646e8b2085a92dfeadb9d203d"} Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.361434 4832 scope.go:117] "RemoveContainer" containerID="b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.362178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-scripts\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.362256 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9477fabe-d697-48a4-ab52-424034371e3c-horizon-secret-key\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.362275 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9477fabe-d697-48a4-ab52-424034371e3c-logs\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.362345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdvm\" (UniqueName: \"kubernetes.io/projected/9477fabe-d697-48a4-ab52-424034371e3c-kube-api-access-lzdvm\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.366270 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-config-data\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: E0125 08:15:43.367501 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2\": container with ID starting with b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2 not found: ID does not exist" containerID="b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.367757 4832 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2"} err="failed to get container status \"b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2\": rpc error: code = NotFound desc = could not find container \"b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2\": container with ID starting with b398527b5228bc2afc6bc862bf8f2f60d828dc21087662ee6cebea11393611a2 not found: ID does not exist" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.417494 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-gj9pp"] Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.468508 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-gj9pp"] Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.474859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-config-data\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.474971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-scripts\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.475029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9477fabe-d697-48a4-ab52-424034371e3c-horizon-secret-key\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 
08:15:43.475052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9477fabe-d697-48a4-ab52-424034371e3c-logs\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.475227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdvm\" (UniqueName: \"kubernetes.io/projected/9477fabe-d697-48a4-ab52-424034371e3c-kube-api-access-lzdvm\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.476105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9477fabe-d697-48a4-ab52-424034371e3c-logs\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.479594 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-scripts\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.480230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-config-data\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.565085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/9477fabe-d697-48a4-ab52-424034371e3c-horizon-secret-key\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.579235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdvm\" (UniqueName: \"kubernetes.io/projected/9477fabe-d697-48a4-ab52-424034371e3c-kube-api-access-lzdvm\") pod \"horizon-5cc6ffb9d5-b9rt2\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.686768 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a036699e-21c9-45bd-abf1-f2b054143deb" path="/var/lib/kubelet/pods/a036699e-21c9-45bd-abf1-f2b054143deb/volumes" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.687518 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66779ca-60d0-4bce-9bb6-e10b6508ad7f" path="/var/lib/kubelet/pods/d66779ca-60d0-4bce-9bb6-e10b6508ad7f/volumes" Jan 25 08:15:43 crc kubenswrapper[4832]: I0125 08:15:43.711016 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:15:44 crc kubenswrapper[4832]: I0125 08:15:44.200499 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc6ffb9d5-b9rt2"] Jan 25 08:15:44 crc kubenswrapper[4832]: W0125 08:15:44.222749 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9477fabe_d697_48a4_ab52_424034371e3c.slice/crio-9849a29f9a53147c3d755124198d922aff9f3e91f108f035f88467d04f06ebea WatchSource:0}: Error finding container 9849a29f9a53147c3d755124198d922aff9f3e91f108f035f88467d04f06ebea: Status 404 returned error can't find the container with id 9849a29f9a53147c3d755124198d922aff9f3e91f108f035f88467d04f06ebea Jan 25 08:15:44 crc kubenswrapper[4832]: I0125 08:15:44.396067 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" event={"ID":"91ca2186-0d45-4246-9a45-4cca828f2e82","Type":"ContainerStarted","Data":"cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c"} Jan 25 08:15:44 crc kubenswrapper[4832]: I0125 08:15:44.396836 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:44 crc kubenswrapper[4832]: I0125 08:15:44.398868 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc6ffb9d5-b9rt2" event={"ID":"9477fabe-d697-48a4-ab52-424034371e3c","Type":"ContainerStarted","Data":"9849a29f9a53147c3d755124198d922aff9f3e91f108f035f88467d04f06ebea"} Jan 25 08:15:47 crc kubenswrapper[4832]: I0125 08:15:47.697479 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" podStartSLOduration=7.697459376 podStartE2EDuration="7.697459376s" podCreationTimestamp="2026-01-25 08:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-25 08:15:44.423881948 +0000 UTC m=+1127.097705481" watchObservedRunningTime="2026-01-25 08:15:47.697459376 +0000 UTC m=+1130.371282909" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.461982 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85c746769-89kvs"] Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.485177 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-856b6b4996-m59cl"] Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.486697 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.488553 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.517880 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-856b6b4996-m59cl"] Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.560590 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc6ffb9d5-b9rt2"] Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.563461 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-secret-key\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.563548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-tls-certs\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 
08:15:49.563578 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-combined-ca-bundle\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.563626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-config-data\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.563706 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-scripts\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.563736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573d9b12-352d-4b14-b79c-f2a4a3bfec61-logs\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.563777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzvt\" (UniqueName: \"kubernetes.io/projected/573d9b12-352d-4b14-b79c-f2a4a3bfec61-kube-api-access-mpzvt\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.595061 4832 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f649cfc6-vzpx7"] Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.597004 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.606594 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f649cfc6-vzpx7"] Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665079 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-combined-ca-bundle\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26fd6803-3263-4989-a86e-908f6a504d14-logs\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665169 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6803-3263-4989-a86e-908f6a504d14-scripts\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-horizon-tls-certs\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc 
kubenswrapper[4832]: I0125 08:15:49.665216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-config-data\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-scripts\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665264 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573d9b12-352d-4b14-b79c-f2a4a3bfec61-logs\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665287 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26fd6803-3263-4989-a86e-908f6a504d14-config-data\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpzvt\" (UniqueName: \"kubernetes.io/projected/573d9b12-352d-4b14-b79c-f2a4a3bfec61-kube-api-access-mpzvt\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665353 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-secret-key\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-combined-ca-bundle\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665413 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlsjc\" (UniqueName: \"kubernetes.io/projected/26fd6803-3263-4989-a86e-908f6a504d14-kube-api-access-zlsjc\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-horizon-secret-key\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.665456 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-tls-certs\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.666304 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/573d9b12-352d-4b14-b79c-f2a4a3bfec61-logs\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.666721 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-scripts\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.667279 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-config-data\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.678676 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-tls-certs\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.678755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-secret-key\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.678891 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-combined-ca-bundle\") pod \"horizon-856b6b4996-m59cl\" (UID: 
\"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.692975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpzvt\" (UniqueName: \"kubernetes.io/projected/573d9b12-352d-4b14-b79c-f2a4a3bfec61-kube-api-access-mpzvt\") pod \"horizon-856b6b4996-m59cl\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.766725 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26fd6803-3263-4989-a86e-908f6a504d14-logs\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.766797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6803-3263-4989-a86e-908f6a504d14-scripts\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.766815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-horizon-tls-certs\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.766876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26fd6803-3263-4989-a86e-908f6a504d14-config-data\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 
08:15:49.766932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-combined-ca-bundle\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.766953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlsjc\" (UniqueName: \"kubernetes.io/projected/26fd6803-3263-4989-a86e-908f6a504d14-kube-api-access-zlsjc\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.766977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-horizon-secret-key\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.767714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26fd6803-3263-4989-a86e-908f6a504d14-logs\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.768085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26fd6803-3263-4989-a86e-908f6a504d14-scripts\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.768807 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/26fd6803-3263-4989-a86e-908f6a504d14-config-data\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.770830 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-horizon-secret-key\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.771214 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-horizon-tls-certs\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.772784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26fd6803-3263-4989-a86e-908f6a504d14-combined-ca-bundle\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.786209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlsjc\" (UniqueName: \"kubernetes.io/projected/26fd6803-3263-4989-a86e-908f6a504d14-kube-api-access-zlsjc\") pod \"horizon-f649cfc6-vzpx7\" (UID: \"26fd6803-3263-4989-a86e-908f6a504d14\") " pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.809935 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:15:49 crc kubenswrapper[4832]: I0125 08:15:49.915874 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:15:51 crc kubenswrapper[4832]: I0125 08:15:51.347563 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:15:51 crc kubenswrapper[4832]: I0125 08:15:51.423947 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vswdl"] Jan 25 08:15:51 crc kubenswrapper[4832]: I0125 08:15:51.424232 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-vswdl" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" containerID="cri-o://9d3da0a7bdd1779a51a05bb43d06cfc2079f43c7facd448746b691f4951b451d" gracePeriod=10 Jan 25 08:15:51 crc kubenswrapper[4832]: I0125 08:15:51.653767 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-vswdl" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 25 08:15:52 crc kubenswrapper[4832]: I0125 08:15:52.487523 4832 generic.go:334] "Generic (PLEG): container finished" podID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerID="9d3da0a7bdd1779a51a05bb43d06cfc2079f43c7facd448746b691f4951b451d" exitCode=0 Jan 25 08:15:52 crc kubenswrapper[4832]: I0125 08:15:52.487792 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vswdl" event={"ID":"d36bac18-e73f-4718-b2b7-89fc54febd73","Type":"ContainerDied","Data":"9d3da0a7bdd1779a51a05bb43d06cfc2079f43c7facd448746b691f4951b451d"} Jan 25 08:15:53 crc kubenswrapper[4832]: I0125 08:15:53.498179 4832 generic.go:334] "Generic (PLEG): container finished" podID="5e0cb7b1-ca34-4d43-ab93-febd41f35489" containerID="d399a17cccba09c5367e9af52b2eed1ccb200a38317606a105d12e84fbc4af18" exitCode=0 Jan 25 08:15:53 crc kubenswrapper[4832]: I0125 08:15:53.498276 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vn66d" event={"ID":"5e0cb7b1-ca34-4d43-ab93-febd41f35489","Type":"ContainerDied","Data":"d399a17cccba09c5367e9af52b2eed1ccb200a38317606a105d12e84fbc4af18"} Jan 25 08:15:56 crc kubenswrapper[4832]: I0125 08:15:56.653733 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-vswdl" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 25 08:15:57 crc kubenswrapper[4832]: E0125 08:15:57.068103 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 25 08:15:57 crc kubenswrapper[4832]: E0125 08:15:57.068593 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fh8bhbh88h557h8ch696hf9h4h5cdh88h694h554hdchd5h5d7h666h9h5bbh656h695h66bh54dh595hddh656h54dhf8h594hdh75h596q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52mhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-547d75495c-rgz7z_openstack(05d31ada-06df-4ffc-9e3a-3d476edaaa4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:15:57 crc kubenswrapper[4832]: E0125 08:15:57.072112 
4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-547d75495c-rgz7z" podUID="05d31ada-06df-4ffc-9e3a-3d476edaaa4f" Jan 25 08:15:57 crc kubenswrapper[4832]: E0125 08:15:57.075240 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 25 08:15:57 crc kubenswrapper[4832]: E0125 08:15:57.075366 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch566h684h85hc7h8h66dh67fhddh5dh66ch5dch585h544h546hf7h84h58ch574h66hdch656hc5hd6h79h5dh649hb8h55fh557h6fh698q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzdvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5cc6ffb9d5-b9rt2_openstack(9477fabe-d697-48a4-ab52-424034371e3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:15:57 crc kubenswrapper[4832]: E0125 
08:15:57.077876 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5cc6ffb9d5-b9rt2" podUID="9477fabe-d697-48a4-ab52-424034371e3c" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.143978 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.199877 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndndd\" (UniqueName: \"kubernetes.io/projected/5e0cb7b1-ca34-4d43-ab93-febd41f35489-kube-api-access-ndndd\") pod \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.200030 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-fernet-keys\") pod \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.200067 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-config-data\") pod \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.200088 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-scripts\") pod 
\"5e0cb7b1-ca34-4d43-ab93-febd41f35489\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.200143 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-combined-ca-bundle\") pod \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.200199 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-credential-keys\") pod \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\" (UID: \"5e0cb7b1-ca34-4d43-ab93-febd41f35489\") " Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.206084 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-scripts" (OuterVolumeSpecName: "scripts") pod "5e0cb7b1-ca34-4d43-ab93-febd41f35489" (UID: "5e0cb7b1-ca34-4d43-ab93-febd41f35489"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.206204 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5e0cb7b1-ca34-4d43-ab93-febd41f35489" (UID: "5e0cb7b1-ca34-4d43-ab93-febd41f35489"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.213928 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5e0cb7b1-ca34-4d43-ab93-febd41f35489" (UID: "5e0cb7b1-ca34-4d43-ab93-febd41f35489"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.213954 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0cb7b1-ca34-4d43-ab93-febd41f35489-kube-api-access-ndndd" (OuterVolumeSpecName: "kube-api-access-ndndd") pod "5e0cb7b1-ca34-4d43-ab93-febd41f35489" (UID: "5e0cb7b1-ca34-4d43-ab93-febd41f35489"). InnerVolumeSpecName "kube-api-access-ndndd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.229253 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-config-data" (OuterVolumeSpecName: "config-data") pod "5e0cb7b1-ca34-4d43-ab93-febd41f35489" (UID: "5e0cb7b1-ca34-4d43-ab93-febd41f35489"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.230799 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e0cb7b1-ca34-4d43-ab93-febd41f35489" (UID: "5e0cb7b1-ca34-4d43-ab93-febd41f35489"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.302367 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.302431 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.302445 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndndd\" (UniqueName: \"kubernetes.io/projected/5e0cb7b1-ca34-4d43-ab93-febd41f35489-kube-api-access-ndndd\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.302462 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.302474 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.302486 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0cb7b1-ca34-4d43-ab93-febd41f35489-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.532921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vn66d" event={"ID":"5e0cb7b1-ca34-4d43-ab93-febd41f35489","Type":"ContainerDied","Data":"5803cdbd66e38038d219fdaeb9fbbef9f7acecd3c4c7fccb510ebcb406268a59"} Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 
08:15:57.532975 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5803cdbd66e38038d219fdaeb9fbbef9f7acecd3c4c7fccb510ebcb406268a59" Jan 25 08:15:57 crc kubenswrapper[4832]: I0125 08:15:57.533029 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vn66d" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.267283 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vn66d"] Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.275075 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vn66d"] Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.363583 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5dqnt"] Jan 25 08:15:58 crc kubenswrapper[4832]: E0125 08:15:58.363978 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cb7b1-ca34-4d43-ab93-febd41f35489" containerName="keystone-bootstrap" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.363996 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cb7b1-ca34-4d43-ab93-febd41f35489" containerName="keystone-bootstrap" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.364176 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0cb7b1-ca34-4d43-ab93-febd41f35489" containerName="keystone-bootstrap" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.364746 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.367538 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.367859 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xml8n" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.368020 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.368100 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.368233 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.389971 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5dqnt"] Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.425203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-combined-ca-bundle\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.425250 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-credential-keys\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.425331 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vkwpq\" (UniqueName: \"kubernetes.io/projected/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-kube-api-access-vkwpq\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.425353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-fernet-keys\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.425398 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-scripts\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.425711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-config-data\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.527431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-config-data\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.527533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-combined-ca-bundle\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.527559 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-credential-keys\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.527602 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwpq\" (UniqueName: \"kubernetes.io/projected/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-kube-api-access-vkwpq\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.527621 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-fernet-keys\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.527644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-scripts\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.533901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-scripts\") pod \"keystone-bootstrap-5dqnt\" (UID: 
\"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.534164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-combined-ca-bundle\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.534202 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-credential-keys\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.535828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-config-data\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.536292 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-fernet-keys\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 08:15:58.549512 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwpq\" (UniqueName: \"kubernetes.io/projected/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-kube-api-access-vkwpq\") pod \"keystone-bootstrap-5dqnt\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:58 crc kubenswrapper[4832]: I0125 
08:15:58.689187 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:15:59 crc kubenswrapper[4832]: I0125 08:15:59.680720 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0cb7b1-ca34-4d43-ab93-febd41f35489" path="/var/lib/kubelet/pods/5e0cb7b1-ca34-4d43-ab93-febd41f35489/volumes" Jan 25 08:16:01 crc kubenswrapper[4832]: I0125 08:16:01.653837 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-vswdl" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 25 08:16:01 crc kubenswrapper[4832]: I0125 08:16:01.654235 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:16:06 crc kubenswrapper[4832]: E0125 08:16:06.243050 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 25 08:16:06 crc kubenswrapper[4832]: E0125 08:16:06.243587 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c8hd7h5c8h87h587h596h5bbh5cbh698hb5h584h677hdfh97h59fhc6h5d6h9bh676hffh8bh5cdhfh66bh547h6h7fh696h695h697hf9hbfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v67wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-85c746769-89kvs_openstack(8129d5bc-af98-4ef4-b204-fc568ac4ae11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:16:06 crc kubenswrapper[4832]: E0125 
08:16:06.245817 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-85c746769-89kvs" podUID="8129d5bc-af98-4ef4-b204-fc568ac4ae11" Jan 25 08:16:08 crc kubenswrapper[4832]: E0125 08:16:08.371019 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 25 08:16:08 crc kubenswrapper[4832]: E0125 08:16:08.371548 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57ch5bh677h645h666h58h6fh7h5bdh8dh94h649h87hb9h594h58ch575h548h64hcbh669h66bh588h99h56dh574h698h5b7h54h5fbh86h96q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5q9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b48b257e-ddb7-486d-8788-489ca788ac1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:16:11 crc kubenswrapper[4832]: I0125 08:16:11.653509 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-vswdl" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 25 08:16:12 crc kubenswrapper[4832]: E0125 08:16:12.317628 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 25 08:16:12 crc kubenswrapper[4832]: E0125 08:16:12.317862 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgc9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-7tnnv_openstack(e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:16:12 crc kubenswrapper[4832]: E0125 08:16:12.319313 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-7tnnv" podUID="e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.389349 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.396278 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546252 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-scripts\") pod \"9477fabe-d697-48a4-ab52-424034371e3c\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546362 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzdvm\" (UniqueName: \"kubernetes.io/projected/9477fabe-d697-48a4-ab52-424034371e3c-kube-api-access-lzdvm\") pod \"9477fabe-d697-48a4-ab52-424034371e3c\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546416 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-logs\") pod \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 
08:16:12.546475 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9477fabe-d697-48a4-ab52-424034371e3c-logs\") pod \"9477fabe-d697-48a4-ab52-424034371e3c\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546514 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-horizon-secret-key\") pod \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546600 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52mhf\" (UniqueName: \"kubernetes.io/projected/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-kube-api-access-52mhf\") pod \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546692 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-scripts\") pod \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546716 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-config-data\") pod \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\" (UID: \"05d31ada-06df-4ffc-9e3a-3d476edaaa4f\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-config-data\") pod \"9477fabe-d697-48a4-ab52-424034371e3c\" 
(UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546807 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9477fabe-d697-48a4-ab52-424034371e3c-horizon-secret-key\") pod \"9477fabe-d697-48a4-ab52-424034371e3c\" (UID: \"9477fabe-d697-48a4-ab52-424034371e3c\") " Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.546853 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-logs" (OuterVolumeSpecName: "logs") pod "05d31ada-06df-4ffc-9e3a-3d476edaaa4f" (UID: "05d31ada-06df-4ffc-9e3a-3d476edaaa4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.547359 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-scripts" (OuterVolumeSpecName: "scripts") pod "9477fabe-d697-48a4-ab52-424034371e3c" (UID: "9477fabe-d697-48a4-ab52-424034371e3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.547374 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.547559 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-scripts" (OuterVolumeSpecName: "scripts") pod "05d31ada-06df-4ffc-9e3a-3d476edaaa4f" (UID: "05d31ada-06df-4ffc-9e3a-3d476edaaa4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.547628 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-config-data" (OuterVolumeSpecName: "config-data") pod "9477fabe-d697-48a4-ab52-424034371e3c" (UID: "9477fabe-d697-48a4-ab52-424034371e3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.547650 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-config-data" (OuterVolumeSpecName: "config-data") pod "05d31ada-06df-4ffc-9e3a-3d476edaaa4f" (UID: "05d31ada-06df-4ffc-9e3a-3d476edaaa4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.548324 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9477fabe-d697-48a4-ab52-424034371e3c-logs" (OuterVolumeSpecName: "logs") pod "9477fabe-d697-48a4-ab52-424034371e3c" (UID: "9477fabe-d697-48a4-ab52-424034371e3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.552533 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9477fabe-d697-48a4-ab52-424034371e3c-kube-api-access-lzdvm" (OuterVolumeSpecName: "kube-api-access-lzdvm") pod "9477fabe-d697-48a4-ab52-424034371e3c" (UID: "9477fabe-d697-48a4-ab52-424034371e3c"). InnerVolumeSpecName "kube-api-access-lzdvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.553061 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-kube-api-access-52mhf" (OuterVolumeSpecName: "kube-api-access-52mhf") pod "05d31ada-06df-4ffc-9e3a-3d476edaaa4f" (UID: "05d31ada-06df-4ffc-9e3a-3d476edaaa4f"). InnerVolumeSpecName "kube-api-access-52mhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.557813 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "05d31ada-06df-4ffc-9e3a-3d476edaaa4f" (UID: "05d31ada-06df-4ffc-9e3a-3d476edaaa4f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.566110 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9477fabe-d697-48a4-ab52-424034371e3c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9477fabe-d697-48a4-ab52-424034371e3c" (UID: "9477fabe-d697-48a4-ab52-424034371e3c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649296 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52mhf\" (UniqueName: \"kubernetes.io/projected/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-kube-api-access-52mhf\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649333 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649358 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649370 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649380 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9477fabe-d697-48a4-ab52-424034371e3c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649403 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9477fabe-d697-48a4-ab52-424034371e3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649412 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzdvm\" (UniqueName: \"kubernetes.io/projected/9477fabe-d697-48a4-ab52-424034371e3c-kube-api-access-lzdvm\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649422 4832 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9477fabe-d697-48a4-ab52-424034371e3c-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.649430 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05d31ada-06df-4ffc-9e3a-3d476edaaa4f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.673890 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc6ffb9d5-b9rt2" event={"ID":"9477fabe-d697-48a4-ab52-424034371e3c","Type":"ContainerDied","Data":"9849a29f9a53147c3d755124198d922aff9f3e91f108f035f88467d04f06ebea"} Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.674000 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc6ffb9d5-b9rt2" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.684190 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-547d75495c-rgz7z" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.684252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-547d75495c-rgz7z" event={"ID":"05d31ada-06df-4ffc-9e3a-3d476edaaa4f","Type":"ContainerDied","Data":"bfd4a48c6cf38b45521a7be3e60b04d0f5dfd59e4242bb6b85f21af1571012e6"} Jan 25 08:16:12 crc kubenswrapper[4832]: E0125 08:16:12.685511 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-7tnnv" podUID="e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.752533 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-547d75495c-rgz7z"] Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.761446 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-547d75495c-rgz7z"] Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.787455 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc6ffb9d5-b9rt2"] Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.794013 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cc6ffb9d5-b9rt2"] Jan 25 08:16:12 crc kubenswrapper[4832]: I0125 08:16:12.831700 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-856b6b4996-m59cl"] Jan 25 08:16:13 crc kubenswrapper[4832]: E0125 08:16:13.650526 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 25 08:16:13 crc kubenswrapper[4832]: E0125 08:16:13.650741 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxq2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vrvb2_openstack(e793ce7a-261b-4b97-8436-c7a5efc5e126): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:16:13 crc kubenswrapper[4832]: E0125 08:16:13.651917 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vrvb2" podUID="e793ce7a-261b-4b97-8436-c7a5efc5e126" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.683145 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d31ada-06df-4ffc-9e3a-3d476edaaa4f" path="/var/lib/kubelet/pods/05d31ada-06df-4ffc-9e3a-3d476edaaa4f/volumes" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.683692 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9477fabe-d697-48a4-ab52-424034371e3c" path="/var/lib/kubelet/pods/9477fabe-d697-48a4-ab52-424034371e3c/volumes" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.692747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vswdl" event={"ID":"d36bac18-e73f-4718-b2b7-89fc54febd73","Type":"ContainerDied","Data":"29d2f489404d2649fc8b8f47acbb40f91aa617609dffd2fafca640fca875c641"} Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.692793 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d2f489404d2649fc8b8f47acbb40f91aa617609dffd2fafca640fca875c641" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 
08:16:13.694692 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85c746769-89kvs" event={"ID":"8129d5bc-af98-4ef4-b204-fc568ac4ae11","Type":"ContainerDied","Data":"a370719ca3b851c5f1e01c1410d84eaf2f2ce1e456ee3b1a05b790cfc85b3083"} Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.694717 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a370719ca3b851c5f1e01c1410d84eaf2f2ce1e456ee3b1a05b790cfc85b3083" Jan 25 08:16:13 crc kubenswrapper[4832]: E0125 08:16:13.696713 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-vrvb2" podUID="e793ce7a-261b-4b97-8436-c7a5efc5e126" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.708997 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85c746769-89kvs" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.725226 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.786822 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8129d5bc-af98-4ef4-b204-fc568ac4ae11-horizon-secret-key\") pod \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.786929 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-config-data\") pod \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.786993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-nb\") pod \"d36bac18-e73f-4718-b2b7-89fc54febd73\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.787026 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v67wb\" (UniqueName: \"kubernetes.io/projected/8129d5bc-af98-4ef4-b204-fc568ac4ae11-kube-api-access-v67wb\") pod \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.787162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-sb\") pod \"d36bac18-e73f-4718-b2b7-89fc54febd73\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.787202 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-dns-svc\") pod \"d36bac18-e73f-4718-b2b7-89fc54febd73\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.787257 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-scripts\") pod \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.787323 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mcj2\" (UniqueName: \"kubernetes.io/projected/d36bac18-e73f-4718-b2b7-89fc54febd73-kube-api-access-2mcj2\") pod \"d36bac18-e73f-4718-b2b7-89fc54febd73\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.787610 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-config\") pod \"d36bac18-e73f-4718-b2b7-89fc54febd73\" (UID: \"d36bac18-e73f-4718-b2b7-89fc54febd73\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.787915 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8129d5bc-af98-4ef4-b204-fc568ac4ae11-logs\") pod \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\" (UID: \"8129d5bc-af98-4ef4-b204-fc568ac4ae11\") " Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.788006 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-config-data" (OuterVolumeSpecName: "config-data") pod "8129d5bc-af98-4ef4-b204-fc568ac4ae11" (UID: "8129d5bc-af98-4ef4-b204-fc568ac4ae11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.790162 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.791677 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8129d5bc-af98-4ef4-b204-fc568ac4ae11-logs" (OuterVolumeSpecName: "logs") pod "8129d5bc-af98-4ef4-b204-fc568ac4ae11" (UID: "8129d5bc-af98-4ef4-b204-fc568ac4ae11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.792023 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-scripts" (OuterVolumeSpecName: "scripts") pod "8129d5bc-af98-4ef4-b204-fc568ac4ae11" (UID: "8129d5bc-af98-4ef4-b204-fc568ac4ae11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.794999 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8129d5bc-af98-4ef4-b204-fc568ac4ae11-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8129d5bc-af98-4ef4-b204-fc568ac4ae11" (UID: "8129d5bc-af98-4ef4-b204-fc568ac4ae11"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.822917 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36bac18-e73f-4718-b2b7-89fc54febd73-kube-api-access-2mcj2" (OuterVolumeSpecName: "kube-api-access-2mcj2") pod "d36bac18-e73f-4718-b2b7-89fc54febd73" (UID: "d36bac18-e73f-4718-b2b7-89fc54febd73"). 
InnerVolumeSpecName "kube-api-access-2mcj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.824338 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8129d5bc-af98-4ef4-b204-fc568ac4ae11-kube-api-access-v67wb" (OuterVolumeSpecName: "kube-api-access-v67wb") pod "8129d5bc-af98-4ef4-b204-fc568ac4ae11" (UID: "8129d5bc-af98-4ef4-b204-fc568ac4ae11"). InnerVolumeSpecName "kube-api-access-v67wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.846888 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d36bac18-e73f-4718-b2b7-89fc54febd73" (UID: "d36bac18-e73f-4718-b2b7-89fc54febd73"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.852114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d36bac18-e73f-4718-b2b7-89fc54febd73" (UID: "d36bac18-e73f-4718-b2b7-89fc54febd73"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.864678 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-config" (OuterVolumeSpecName: "config") pod "d36bac18-e73f-4718-b2b7-89fc54febd73" (UID: "d36bac18-e73f-4718-b2b7-89fc54febd73"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.867042 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d36bac18-e73f-4718-b2b7-89fc54febd73" (UID: "d36bac18-e73f-4718-b2b7-89fc54febd73"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892277 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892317 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v67wb\" (UniqueName: \"kubernetes.io/projected/8129d5bc-af98-4ef4-b204-fc568ac4ae11-kube-api-access-v67wb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892333 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892346 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892359 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8129d5bc-af98-4ef4-b204-fc568ac4ae11-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892369 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mcj2\" (UniqueName: 
\"kubernetes.io/projected/d36bac18-e73f-4718-b2b7-89fc54febd73-kube-api-access-2mcj2\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892396 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36bac18-e73f-4718-b2b7-89fc54febd73-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892409 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8129d5bc-af98-4ef4-b204-fc568ac4ae11-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:13 crc kubenswrapper[4832]: I0125 08:16:13.892420 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8129d5bc-af98-4ef4-b204-fc568ac4ae11-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:14 crc kubenswrapper[4832]: W0125 08:16:14.257763 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod573d9b12_352d_4b14_b79c_f2a4a3bfec61.slice/crio-1742dd5219b7af04a6e13c07f9379331dad7bce12fb59c3d9128bb68d2f8e984 WatchSource:0}: Error finding container 1742dd5219b7af04a6e13c07f9379331dad7bce12fb59c3d9128bb68d2f8e984: Status 404 returned error can't find the container with id 1742dd5219b7af04a6e13c07f9379331dad7bce12fb59c3d9128bb68d2f8e984 Jan 25 08:16:14 crc kubenswrapper[4832]: E0125 08:16:14.265792 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 25 08:16:14 crc kubenswrapper[4832]: E0125 08:16:14.265957 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xcbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xdqfx_openstack(f4bbdba8-c7bc-4dd7-ae19-1655bc089a86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:16:14 crc kubenswrapper[4832]: E0125 08:16:14.267661 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xdqfx" 
podUID="f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.706217 4832 generic.go:334] "Generic (PLEG): container finished" podID="88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" containerID="5d8a4aebb6051b9a2ea061e44a57637bc058c8f737d722b1a2136d729d292408" exitCode=0 Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.706278 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfc28" event={"ID":"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11","Type":"ContainerDied","Data":"5d8a4aebb6051b9a2ea061e44a57637bc058c8f737d722b1a2136d729d292408"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.708284 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vswdl" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.708309 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85c746769-89kvs" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.708373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856b6b4996-m59cl" event={"ID":"573d9b12-352d-4b14-b79c-f2a4a3bfec61","Type":"ContainerStarted","Data":"1742dd5219b7af04a6e13c07f9379331dad7bce12fb59c3d9128bb68d2f8e984"} Jan 25 08:16:15 crc kubenswrapper[4832]: E0125 08:16:14.710243 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xdqfx" podUID="f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.861283 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vswdl"] Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.870395 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-698758b865-vswdl"] Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.945132 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85c746769-89kvs"] Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:14.947288 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85c746769-89kvs"] Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.378084 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5dqnt"] Jan 25 08:16:15 crc kubenswrapper[4832]: W0125 08:16:15.383456 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d1875b5_9bf9_49f8_8600_d4e2c2804c47.slice/crio-0d1643b679f86171593dad2fb56be137a45ce5df6b9da36709223073de21df45 WatchSource:0}: Error finding container 0d1643b679f86171593dad2fb56be137a45ce5df6b9da36709223073de21df45: Status 404 returned error can't find the container with id 0d1643b679f86171593dad2fb56be137a45ce5df6b9da36709223073de21df45 Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.387263 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f649cfc6-vzpx7"] Jan 25 08:16:15 crc kubenswrapper[4832]: W0125 08:16:15.399072 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26fd6803_3263_4989_a86e_908f6a504d14.slice/crio-f7184e9d5e04c5ebca2ee6b53815922cd8280b8035c749d64afbe6653319854f WatchSource:0}: Error finding container f7184e9d5e04c5ebca2ee6b53815922cd8280b8035c749d64afbe6653319854f: Status 404 returned error can't find the container with id f7184e9d5e04c5ebca2ee6b53815922cd8280b8035c749d64afbe6653319854f Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.686601 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8129d5bc-af98-4ef4-b204-fc568ac4ae11" 
path="/var/lib/kubelet/pods/8129d5bc-af98-4ef4-b204-fc568ac4ae11/volumes" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.687640 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" path="/var/lib/kubelet/pods/d36bac18-e73f-4718-b2b7-89fc54febd73/volumes" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.717786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f649cfc6-vzpx7" event={"ID":"26fd6803-3263-4989-a86e-908f6a504d14","Type":"ContainerStarted","Data":"dfb3149954503b35d35799f49db8b1d162980d5b6630044707b9d05f3f264fb8"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.717829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f649cfc6-vzpx7" event={"ID":"26fd6803-3263-4989-a86e-908f6a504d14","Type":"ContainerStarted","Data":"f7184e9d5e04c5ebca2ee6b53815922cd8280b8035c749d64afbe6653319854f"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.720436 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnzjb" event={"ID":"88b922f3-0125-4078-8ec7-ad4edd04d0ed","Type":"ContainerStarted","Data":"2bc24f26d829b53a811da3b1657056332cb5bca551cb0d9c4b02484b0306b433"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.731206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856b6b4996-m59cl" event={"ID":"573d9b12-352d-4b14-b79c-f2a4a3bfec61","Type":"ContainerStarted","Data":"bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.731259 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856b6b4996-m59cl" event={"ID":"573d9b12-352d-4b14-b79c-f2a4a3bfec61","Type":"ContainerStarted","Data":"c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.735240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-5dqnt" event={"ID":"0d1875b5-9bf9-49f8-8600-d4e2c2804c47","Type":"ContainerStarted","Data":"60af9015ae9720b19176d23260a846349c530a9b3b692bf9315265e29c80cfec"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.735306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5dqnt" event={"ID":"0d1875b5-9bf9-49f8-8600-d4e2c2804c47","Type":"ContainerStarted","Data":"0d1643b679f86171593dad2fb56be137a45ce5df6b9da36709223073de21df45"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.743972 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dnzjb" podStartSLOduration=3.2693749309999998 podStartE2EDuration="1m15.743950133s" podCreationTimestamp="2026-01-25 08:15:00 +0000 UTC" firstStartedPulling="2026-01-25 08:15:02.252217174 +0000 UTC m=+1084.926040707" lastFinishedPulling="2026-01-25 08:16:14.726792376 +0000 UTC m=+1157.400615909" observedRunningTime="2026-01-25 08:16:15.737714458 +0000 UTC m=+1158.411537991" watchObservedRunningTime="2026-01-25 08:16:15.743950133 +0000 UTC m=+1158.417773666" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.754254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerStarted","Data":"f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7"} Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.767786 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-856b6b4996-m59cl" podStartSLOduration=26.204845209 podStartE2EDuration="26.767761407s" podCreationTimestamp="2026-01-25 08:15:49 +0000 UTC" firstStartedPulling="2026-01-25 08:16:14.265085104 +0000 UTC m=+1156.938908627" lastFinishedPulling="2026-01-25 08:16:14.828001292 +0000 UTC m=+1157.501824825" observedRunningTime="2026-01-25 08:16:15.757426014 +0000 UTC m=+1158.431249547" 
watchObservedRunningTime="2026-01-25 08:16:15.767761407 +0000 UTC m=+1158.441584940" Jan 25 08:16:15 crc kubenswrapper[4832]: I0125 08:16:15.817017 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5dqnt" podStartSLOduration=17.816995348 podStartE2EDuration="17.816995348s" podCreationTimestamp="2026-01-25 08:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:15.777960637 +0000 UTC m=+1158.451784170" watchObservedRunningTime="2026-01-25 08:16:15.816995348 +0000 UTC m=+1158.490818881" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.107649 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfc28" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.231094 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-combined-ca-bundle\") pod \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.231246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck2dk\" (UniqueName: \"kubernetes.io/projected/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-kube-api-access-ck2dk\") pod \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.231332 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-config\") pod \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\" (UID: \"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11\") " Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.237670 4832 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-kube-api-access-ck2dk" (OuterVolumeSpecName: "kube-api-access-ck2dk") pod "88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" (UID: "88d4e115-8ad0-4971-b4aa-cb63d0bd2c11"). InnerVolumeSpecName "kube-api-access-ck2dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.265578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-config" (OuterVolumeSpecName: "config") pod "88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" (UID: "88d4e115-8ad0-4971-b4aa-cb63d0bd2c11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.290238 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" (UID: "88d4e115-8ad0-4971-b4aa-cb63d0bd2c11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.333345 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.333706 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck2dk\" (UniqueName: \"kubernetes.io/projected/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-kube-api-access-ck2dk\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.333719 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.654754 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-vswdl" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.764926 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f649cfc6-vzpx7" event={"ID":"26fd6803-3263-4989-a86e-908f6a504d14","Type":"ContainerStarted","Data":"10ffcffcab8dac65ab76aaa66f717c929c0bbdef0bea9e339bf47c7390fd8147"} Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.772267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfc28" event={"ID":"88d4e115-8ad0-4971-b4aa-cb63d0bd2c11","Type":"ContainerDied","Data":"d2ede1aa46e229d2cd74203e2c10cd703d999c382ed21e04431c7ef6d77da762"} Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.772314 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ede1aa46e229d2cd74203e2c10cd703d999c382ed21e04431c7ef6d77da762" 
Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.772379 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfc28" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.794120 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f649cfc6-vzpx7" podStartSLOduration=27.794094042 podStartE2EDuration="27.794094042s" podCreationTimestamp="2026-01-25 08:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:16.787925258 +0000 UTC m=+1159.461748801" watchObservedRunningTime="2026-01-25 08:16:16.794094042 +0000 UTC m=+1159.467917585" Jan 25 08:16:16 crc kubenswrapper[4832]: I0125 08:16:16.999742 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65965d6475-wsdhh"] Jan 25 08:16:17 crc kubenswrapper[4832]: E0125 08:16:17.000215 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" containerName="neutron-db-sync" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.000236 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" containerName="neutron-db-sync" Jan 25 08:16:17 crc kubenswrapper[4832]: E0125 08:16:17.000289 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.000302 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" Jan 25 08:16:17 crc kubenswrapper[4832]: E0125 08:16:17.000320 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="init" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.000330 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="init" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.000647 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36bac18-e73f-4718-b2b7-89fc54febd73" containerName="dnsmasq-dns" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.000678 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" containerName="neutron-db-sync" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.001971 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.017058 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-wsdhh"] Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.052510 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-config\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.052570 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hn7q\" (UniqueName: \"kubernetes.io/projected/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-kube-api-access-2hn7q\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.052636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-swift-storage-0\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: 
\"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.052667 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-nb\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.052717 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-sb\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.052768 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-svc\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.082253 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dc694898-lnc2f"] Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.095111 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.098646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.098702 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d67qp" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.099482 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.099833 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.102091 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc694898-lnc2f"] Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.161905 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-swift-storage-0\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.161993 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljht2\" (UniqueName: \"kubernetes.io/projected/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-kube-api-access-ljht2\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162019 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-ovndb-tls-certs\") pod 
\"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-nb\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-sb\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-combined-ca-bundle\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-svc\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162204 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-httpd-config\") pod \"neutron-dc694898-lnc2f\" (UID: 
\"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162242 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-config\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162264 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-config\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.162287 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hn7q\" (UniqueName: \"kubernetes.io/projected/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-kube-api-access-2hn7q\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.163474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-swift-storage-0\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.163999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-nb\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " 
pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.164868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-svc\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.165578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-sb\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.166757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-config\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.194180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hn7q\" (UniqueName: \"kubernetes.io/projected/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-kube-api-access-2hn7q\") pod \"dnsmasq-dns-65965d6475-wsdhh\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.266643 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-combined-ca-bundle\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.266729 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-httpd-config\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.266767 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-config\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.266843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljht2\" (UniqueName: \"kubernetes.io/projected/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-kube-api-access-ljht2\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.266860 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-ovndb-tls-certs\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.271460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-ovndb-tls-certs\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.273148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-combined-ca-bundle\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.276750 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-httpd-config\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.290501 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-config\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.292217 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljht2\" (UniqueName: \"kubernetes.io/projected/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-kube-api-access-ljht2\") pod \"neutron-dc694898-lnc2f\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.354880 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.426806 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:17 crc kubenswrapper[4832]: I0125 08:16:17.948276 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-wsdhh"] Jan 25 08:16:18 crc kubenswrapper[4832]: I0125 08:16:18.465337 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc694898-lnc2f"] Jan 25 08:16:18 crc kubenswrapper[4832]: W0125 08:16:18.469331 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fdbaf45_d8d7_430d_9c6d_29359e4dd17e.slice/crio-213d462405947634370f71339ea2118b5da6e85e044eaa057a761b433eab668a WatchSource:0}: Error finding container 213d462405947634370f71339ea2118b5da6e85e044eaa057a761b433eab668a: Status 404 returned error can't find the container with id 213d462405947634370f71339ea2118b5da6e85e044eaa057a761b433eab668a Jan 25 08:16:18 crc kubenswrapper[4832]: I0125 08:16:18.862638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc694898-lnc2f" event={"ID":"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e","Type":"ContainerStarted","Data":"213d462405947634370f71339ea2118b5da6e85e044eaa057a761b433eab668a"} Jan 25 08:16:18 crc kubenswrapper[4832]: I0125 08:16:18.885084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" event={"ID":"aba728c5-d77a-4d46-a3e8-2e0d1e31756a","Type":"ContainerStarted","Data":"6b530cc1cf0e1578b59b872971bed0b5dcd8232ba169e2fd47e6516092de68a5"} Jan 25 08:16:18 crc kubenswrapper[4832]: I0125 08:16:18.885500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" event={"ID":"aba728c5-d77a-4d46-a3e8-2e0d1e31756a","Type":"ContainerStarted","Data":"0cd5a4cfbdefaf225008f59f21b0fb893920316413297ec6544d38ee4fcb350a"} Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.698168 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-585cc76cc-zg5pq"] Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.700305 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.703959 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.712462 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.714904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s69lz\" (UniqueName: \"kubernetes.io/projected/196ac30d-ab85-4327-86df-27e637aba0b3-kube-api-access-s69lz\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.714958 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-config\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.714991 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-internal-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.715014 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-httpd-config\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.715048 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-public-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.715106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-combined-ca-bundle\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.715138 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-ovndb-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.777487 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-585cc76cc-zg5pq"] Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.815618 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816079 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816651 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s69lz\" (UniqueName: \"kubernetes.io/projected/196ac30d-ab85-4327-86df-27e637aba0b3-kube-api-access-s69lz\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-config\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816740 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-internal-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-httpd-config\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-public-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-combined-ca-bundle\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.816996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-ovndb-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.832228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-httpd-config\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.836112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-config\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.844216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-public-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.844271 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-combined-ca-bundle\") pod \"neutron-585cc76cc-zg5pq\" (UID: 
\"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.844778 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-ovndb-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.844925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-internal-tls-certs\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.849427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s69lz\" (UniqueName: \"kubernetes.io/projected/196ac30d-ab85-4327-86df-27e637aba0b3-kube-api-access-s69lz\") pod \"neutron-585cc76cc-zg5pq\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.910365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc694898-lnc2f" event={"ID":"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e","Type":"ContainerStarted","Data":"cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26"} Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.910978 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc694898-lnc2f" event={"ID":"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e","Type":"ContainerStarted","Data":"6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a"} Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.911212 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.916338 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.916806 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.925687 4832 generic.go:334] "Generic (PLEG): container finished" podID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerID="6b530cc1cf0e1578b59b872971bed0b5dcd8232ba169e2fd47e6516092de68a5" exitCode=0 Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.926731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" event={"ID":"aba728c5-d77a-4d46-a3e8-2e0d1e31756a","Type":"ContainerDied","Data":"6b530cc1cf0e1578b59b872971bed0b5dcd8232ba169e2fd47e6516092de68a5"} Jan 25 08:16:19 crc kubenswrapper[4832]: I0125 08:16:19.938546 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dc694898-lnc2f" podStartSLOduration=2.93852597 podStartE2EDuration="2.93852597s" podCreationTimestamp="2026-01-25 08:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:19.937650532 +0000 UTC m=+1162.611474075" watchObservedRunningTime="2026-01-25 08:16:19.93852597 +0000 UTC m=+1162.612349503" Jan 25 08:16:20 crc kubenswrapper[4832]: I0125 08:16:20.063386 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:20 crc kubenswrapper[4832]: I0125 08:16:20.726448 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-585cc76cc-zg5pq"] Jan 25 08:16:20 crc kubenswrapper[4832]: I0125 08:16:20.940517 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" event={"ID":"aba728c5-d77a-4d46-a3e8-2e0d1e31756a","Type":"ContainerStarted","Data":"05dfd328d0d18ead32420ac258f446591195a0fcebedc0337ea4bf2187fd90f3"} Jan 25 08:16:20 crc kubenswrapper[4832]: I0125 08:16:20.940680 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:20 crc kubenswrapper[4832]: I0125 08:16:20.941972 4832 generic.go:334] "Generic (PLEG): container finished" podID="0d1875b5-9bf9-49f8-8600-d4e2c2804c47" containerID="60af9015ae9720b19176d23260a846349c530a9b3b692bf9315265e29c80cfec" exitCode=0 Jan 25 08:16:20 crc kubenswrapper[4832]: I0125 08:16:20.942727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5dqnt" event={"ID":"0d1875b5-9bf9-49f8-8600-d4e2c2804c47","Type":"ContainerDied","Data":"60af9015ae9720b19176d23260a846349c530a9b3b692bf9315265e29c80cfec"} Jan 25 08:16:20 crc kubenswrapper[4832]: I0125 08:16:20.966234 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" podStartSLOduration=4.966217085 podStartE2EDuration="4.966217085s" podCreationTimestamp="2026-01-25 08:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:20.958520515 +0000 UTC m=+1163.632344048" watchObservedRunningTime="2026-01-25 08:16:20.966217085 +0000 UTC m=+1163.640040618" Jan 25 08:16:25 crc kubenswrapper[4832]: W0125 08:16:25.551165 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod196ac30d_ab85_4327_86df_27e637aba0b3.slice/crio-df919a29518d05908d94c9b3701eae5787d62340b7101945762ad8e03234c567 WatchSource:0}: Error finding container df919a29518d05908d94c9b3701eae5787d62340b7101945762ad8e03234c567: Status 404 returned error can't find the container with id df919a29518d05908d94c9b3701eae5787d62340b7101945762ad8e03234c567 Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.764744 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.959488 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-fernet-keys\") pod \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.959600 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-combined-ca-bundle\") pod \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.959761 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-config-data\") pod \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.959790 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-scripts\") pod \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " Jan 25 
08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.959856 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-credential-keys\") pod \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.972356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkwpq\" (UniqueName: \"kubernetes.io/projected/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-kube-api-access-vkwpq\") pod \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\" (UID: \"0d1875b5-9bf9-49f8-8600-d4e2c2804c47\") " Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.996808 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0d1875b5-9bf9-49f8-8600-d4e2c2804c47" (UID: "0d1875b5-9bf9-49f8-8600-d4e2c2804c47"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:25 crc kubenswrapper[4832]: I0125 08:16:25.997113 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-scripts" (OuterVolumeSpecName: "scripts") pod "0d1875b5-9bf9-49f8-8600-d4e2c2804c47" (UID: "0d1875b5-9bf9-49f8-8600-d4e2c2804c47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:25.999897 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-kube-api-access-vkwpq" (OuterVolumeSpecName: "kube-api-access-vkwpq") pod "0d1875b5-9bf9-49f8-8600-d4e2c2804c47" (UID: "0d1875b5-9bf9-49f8-8600-d4e2c2804c47"). InnerVolumeSpecName "kube-api-access-vkwpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.010484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0d1875b5-9bf9-49f8-8600-d4e2c2804c47" (UID: "0d1875b5-9bf9-49f8-8600-d4e2c2804c47"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.038078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5dqnt" event={"ID":"0d1875b5-9bf9-49f8-8600-d4e2c2804c47","Type":"ContainerDied","Data":"0d1643b679f86171593dad2fb56be137a45ce5df6b9da36709223073de21df45"} Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.038130 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d1643b679f86171593dad2fb56be137a45ce5df6b9da36709223073de21df45" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.038218 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5dqnt" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.057039 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585cc76cc-zg5pq" event={"ID":"196ac30d-ab85-4327-86df-27e637aba0b3","Type":"ContainerStarted","Data":"df919a29518d05908d94c9b3701eae5787d62340b7101945762ad8e03234c567"} Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.057699 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-config-data" (OuterVolumeSpecName: "config-data") pod "0d1875b5-9bf9-49f8-8600-d4e2c2804c47" (UID: "0d1875b5-9bf9-49f8-8600-d4e2c2804c47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.074580 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d1875b5-9bf9-49f8-8600-d4e2c2804c47" (UID: "0d1875b5-9bf9-49f8-8600-d4e2c2804c47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.075911 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.075933 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.075942 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.075955 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkwpq\" (UniqueName: \"kubernetes.io/projected/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-kube-api-access-vkwpq\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.075963 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.075970 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d1875b5-9bf9-49f8-8600-d4e2c2804c47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.879116 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-699f4599dd-j695n"] Jan 25 08:16:26 crc kubenswrapper[4832]: E0125 08:16:26.880022 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1875b5-9bf9-49f8-8600-d4e2c2804c47" containerName="keystone-bootstrap" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.880039 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1875b5-9bf9-49f8-8600-d4e2c2804c47" containerName="keystone-bootstrap" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.880240 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1875b5-9bf9-49f8-8600-d4e2c2804c47" containerName="keystone-bootstrap" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.881021 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.885026 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.894093 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xml8n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.894494 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.894630 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.895363 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.895509 4832 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.895499 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-699f4599dd-j695n"] Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.993455 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-public-tls-certs\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.993569 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-scripts\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.993603 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-internal-tls-certs\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.993636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-combined-ca-bundle\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.993803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-fernet-keys\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.994252 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-credential-keys\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.994315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-config-data\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:26 crc kubenswrapper[4832]: I0125 08:16:26.994508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbdc\" (UniqueName: \"kubernetes.io/projected/b32b998a-5689-42f6-9c15-b7e794acb916-kube-api-access-tjbdc\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.069209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tnnv" event={"ID":"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a","Type":"ContainerStarted","Data":"55887aa70bb83eb4a9c37bbf1ffa23262c67a7a0d8e23e20ad96ff018bbb23f2"} Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.071465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585cc76cc-zg5pq" 
event={"ID":"196ac30d-ab85-4327-86df-27e637aba0b3","Type":"ContainerStarted","Data":"08846f1d76951f512607b72d43c94cc03251c22467960102f66d465881deb1f9"} Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.071501 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585cc76cc-zg5pq" event={"ID":"196ac30d-ab85-4327-86df-27e637aba0b3","Type":"ContainerStarted","Data":"b931c3aab747871a791f5720b4595fc8a711739518f9f979ec95f13285aefd68"} Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.071614 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.073572 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerStarted","Data":"dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667"} Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.095147 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7tnnv" podStartSLOduration=3.517993002 podStartE2EDuration="47.095121988s" podCreationTimestamp="2026-01-25 08:15:40 +0000 UTC" firstStartedPulling="2026-01-25 08:15:42.189826567 +0000 UTC m=+1124.863650100" lastFinishedPulling="2026-01-25 08:16:25.766955553 +0000 UTC m=+1168.440779086" observedRunningTime="2026-01-25 08:16:27.088655766 +0000 UTC m=+1169.762479299" watchObservedRunningTime="2026-01-25 08:16:27.095121988 +0000 UTC m=+1169.768945521" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.097788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-public-tls-certs\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 
08:16:27.097901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-scripts\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.097920 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-internal-tls-certs\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.097946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-combined-ca-bundle\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.097992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-fernet-keys\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.098587 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-credential-keys\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.098645 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-config-data\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.098808 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjbdc\" (UniqueName: \"kubernetes.io/projected/b32b998a-5689-42f6-9c15-b7e794acb916-kube-api-access-tjbdc\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.104535 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-public-tls-certs\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.106862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-config-data\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.107288 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-internal-tls-certs\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.107904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-fernet-keys\") pod 
\"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.130533 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-credential-keys\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.134997 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-585cc76cc-zg5pq" podStartSLOduration=8.134978045 podStartE2EDuration="8.134978045s" podCreationTimestamp="2026-01-25 08:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:27.122937658 +0000 UTC m=+1169.796761191" watchObservedRunningTime="2026-01-25 08:16:27.134978045 +0000 UTC m=+1169.808801578" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.135067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-combined-ca-bundle\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.135898 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32b998a-5689-42f6-9c15-b7e794acb916-scripts\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.141724 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjbdc\" (UniqueName: 
\"kubernetes.io/projected/b32b998a-5689-42f6-9c15-b7e794acb916-kube-api-access-tjbdc\") pod \"keystone-699f4599dd-j695n\" (UID: \"b32b998a-5689-42f6-9c15-b7e794acb916\") " pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.200967 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.358442 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.444241 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-75nt4"] Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.444530 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" podUID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerName="dnsmasq-dns" containerID="cri-o://cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c" gracePeriod=10 Jan 25 08:16:27 crc kubenswrapper[4832]: I0125 08:16:27.854258 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-699f4599dd-j695n"] Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.057047 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.127344 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-svc\") pod \"91ca2186-0d45-4246-9a45-4cca828f2e82\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.127752 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-config\") pod \"91ca2186-0d45-4246-9a45-4cca828f2e82\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.128186 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-sb\") pod \"91ca2186-0d45-4246-9a45-4cca828f2e82\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.128323 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-swift-storage-0\") pod \"91ca2186-0d45-4246-9a45-4cca828f2e82\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.128528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-nb\") pod \"91ca2186-0d45-4246-9a45-4cca828f2e82\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.128659 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhv9f\" 
(UniqueName: \"kubernetes.io/projected/91ca2186-0d45-4246-9a45-4cca828f2e82-kube-api-access-vhv9f\") pod \"91ca2186-0d45-4246-9a45-4cca828f2e82\" (UID: \"91ca2186-0d45-4246-9a45-4cca828f2e82\") " Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.132441 4832 generic.go:334] "Generic (PLEG): container finished" podID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerID="cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c" exitCode=0 Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.132546 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" event={"ID":"91ca2186-0d45-4246-9a45-4cca828f2e82","Type":"ContainerDied","Data":"cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c"} Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.132576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" event={"ID":"91ca2186-0d45-4246-9a45-4cca828f2e82","Type":"ContainerDied","Data":"ca321f98194076d5703d70240a900848c2a8c4c646e8b2085a92dfeadb9d203d"} Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.132601 4832 scope.go:117] "RemoveContainer" containerID="cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.137001 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-75nt4" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.137598 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ca2186-0d45-4246-9a45-4cca828f2e82-kube-api-access-vhv9f" (OuterVolumeSpecName: "kube-api-access-vhv9f") pod "91ca2186-0d45-4246-9a45-4cca828f2e82" (UID: "91ca2186-0d45-4246-9a45-4cca828f2e82"). InnerVolumeSpecName "kube-api-access-vhv9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.146857 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-699f4599dd-j695n" event={"ID":"b32b998a-5689-42f6-9c15-b7e794acb916","Type":"ContainerStarted","Data":"0c7385dbd0c606b8fe3c5c39463c7445530dbe725735cf5c51880c1f9b6eca00"} Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.242342 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhv9f\" (UniqueName: \"kubernetes.io/projected/91ca2186-0d45-4246-9a45-4cca828f2e82-kube-api-access-vhv9f\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.265467 4832 scope.go:117] "RemoveContainer" containerID="e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.282574 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-config" (OuterVolumeSpecName: "config") pod "91ca2186-0d45-4246-9a45-4cca828f2e82" (UID: "91ca2186-0d45-4246-9a45-4cca828f2e82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.297357 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91ca2186-0d45-4246-9a45-4cca828f2e82" (UID: "91ca2186-0d45-4246-9a45-4cca828f2e82"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.346116 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.346165 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.378621 4832 scope.go:117] "RemoveContainer" containerID="cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c" Jan 25 08:16:28 crc kubenswrapper[4832]: E0125 08:16:28.379195 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c\": container with ID starting with cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c not found: ID does not exist" containerID="cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.379257 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c"} err="failed to get container status \"cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c\": rpc error: code = NotFound desc = could not find container \"cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c\": container with ID starting with cccdd5eb5e560ba70b508b907ea7b798ab0112f27af79429468d64cff012ad9c not found: ID does not exist" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.379327 4832 scope.go:117] "RemoveContainer" containerID="e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4" Jan 25 
08:16:28 crc kubenswrapper[4832]: E0125 08:16:28.380517 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4\": container with ID starting with e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4 not found: ID does not exist" containerID="e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.380610 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4"} err="failed to get container status \"e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4\": rpc error: code = NotFound desc = could not find container \"e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4\": container with ID starting with e16923b764baff25929fb5e9daa5e321a58cccbb09101587d4be62a6a05ffaf4 not found: ID does not exist" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.449126 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91ca2186-0d45-4246-9a45-4cca828f2e82" (UID: "91ca2186-0d45-4246-9a45-4cca828f2e82"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.464358 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "91ca2186-0d45-4246-9a45-4cca828f2e82" (UID: "91ca2186-0d45-4246-9a45-4cca828f2e82"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.468989 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91ca2186-0d45-4246-9a45-4cca828f2e82" (UID: "91ca2186-0d45-4246-9a45-4cca828f2e82"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.550148 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.550651 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.550664 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91ca2186-0d45-4246-9a45-4cca828f2e82-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.774220 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-75nt4"] Jan 25 08:16:28 crc kubenswrapper[4832]: I0125 08:16:28.784245 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-75nt4"] Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.162792 4832 generic.go:334] "Generic (PLEG): container finished" podID="e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" containerID="55887aa70bb83eb4a9c37bbf1ffa23262c67a7a0d8e23e20ad96ff018bbb23f2" exitCode=0 Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.162812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-7tnnv" event={"ID":"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a","Type":"ContainerDied","Data":"55887aa70bb83eb4a9c37bbf1ffa23262c67a7a0d8e23e20ad96ff018bbb23f2"} Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.165571 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-699f4599dd-j695n" event={"ID":"b32b998a-5689-42f6-9c15-b7e794acb916","Type":"ContainerStarted","Data":"e4f105a2252f473f28e95b98bb15b2d1b8fbfccf1daf8643f415c23d96d1d6a0"} Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.167033 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.176399 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrvb2" event={"ID":"e793ce7a-261b-4b97-8436-c7a5efc5e126","Type":"ContainerStarted","Data":"882c4811454c01f87f413004ff277f6ed02b5c631dc3dfb6708b5bf0b9e8e5b1"} Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.216204 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vrvb2" podStartSLOduration=3.824067226 podStartE2EDuration="49.216152154s" podCreationTimestamp="2026-01-25 08:15:40 +0000 UTC" firstStartedPulling="2026-01-25 08:15:41.890475853 +0000 UTC m=+1124.564299396" lastFinishedPulling="2026-01-25 08:16:27.282560791 +0000 UTC m=+1169.956384324" observedRunningTime="2026-01-25 08:16:29.206100279 +0000 UTC m=+1171.879923812" watchObservedRunningTime="2026-01-25 08:16:29.216152154 +0000 UTC m=+1171.889975687" Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.238095 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-699f4599dd-j695n" podStartSLOduration=3.23806863 podStartE2EDuration="3.23806863s" podCreationTimestamp="2026-01-25 08:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-25 08:16:29.234432635 +0000 UTC m=+1171.908256168" watchObservedRunningTime="2026-01-25 08:16:29.23806863 +0000 UTC m=+1171.911892163" Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.686040 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ca2186-0d45-4246-9a45-4cca828f2e82" path="/var/lib/kubelet/pods/91ca2186-0d45-4246-9a45-4cca828f2e82/volumes" Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.813437 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-856b6b4996-m59cl" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 25 08:16:29 crc kubenswrapper[4832]: I0125 08:16:29.919314 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f649cfc6-vzpx7" podUID="26fd6803-3263-4989-a86e-908f6a504d14" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.666343 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7tnnv" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.734312 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-scripts\") pod \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.734805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgc9s\" (UniqueName: \"kubernetes.io/projected/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-kube-api-access-kgc9s\") pod \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.734864 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-combined-ca-bundle\") pod \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.735024 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-logs\") pod \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.735232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-config-data\") pod \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\" (UID: \"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a\") " Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.736316 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-logs" (OuterVolumeSpecName: "logs") pod "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" (UID: "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.741797 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-scripts" (OuterVolumeSpecName: "scripts") pod "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" (UID: "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.744666 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-kube-api-access-kgc9s" (OuterVolumeSpecName: "kube-api-access-kgc9s") pod "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" (UID: "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a"). InnerVolumeSpecName "kube-api-access-kgc9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.774702 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" (UID: "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.788467 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-config-data" (OuterVolumeSpecName: "config-data") pod "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" (UID: "e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.837141 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.837166 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.837175 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.837183 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgc9s\" (UniqueName: \"kubernetes.io/projected/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-kube-api-access-kgc9s\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:30 crc kubenswrapper[4832]: I0125 08:16:30.837219 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.213760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tnnv" event={"ID":"e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a","Type":"ContainerDied","Data":"52cb9f8c097f83c92f04258a204ad51177bbcb4f0218431527547abdd379a578"} Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.213814 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7tnnv" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.213834 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52cb9f8c097f83c92f04258a204ad51177bbcb4f0218431527547abdd379a578" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.215575 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xdqfx" event={"ID":"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86","Type":"ContainerStarted","Data":"7d46d3eff94d22ea0ddca1e6e36f9d0cc0da8afd772359a66cb4417d7e75bfec"} Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.240977 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xdqfx" podStartSLOduration=2.892054633 podStartE2EDuration="51.240939989s" podCreationTimestamp="2026-01-25 08:15:40 +0000 UTC" firstStartedPulling="2026-01-25 08:15:42.189865998 +0000 UTC m=+1124.863689531" lastFinishedPulling="2026-01-25 08:16:30.538751354 +0000 UTC m=+1173.212574887" observedRunningTime="2026-01-25 08:16:31.23712523 +0000 UTC m=+1173.910948763" watchObservedRunningTime="2026-01-25 08:16:31.240939989 +0000 UTC m=+1173.914763532" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.332575 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5cd5868dbb-cxxfw"] Jan 25 08:16:31 crc kubenswrapper[4832]: E0125 08:16:31.333319 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerName="dnsmasq-dns" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.333338 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerName="dnsmasq-dns" Jan 25 08:16:31 crc kubenswrapper[4832]: E0125 08:16:31.333368 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerName="init" Jan 25 08:16:31 crc kubenswrapper[4832]: 
I0125 08:16:31.333375 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerName="init" Jan 25 08:16:31 crc kubenswrapper[4832]: E0125 08:16:31.333412 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" containerName="placement-db-sync" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.333423 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" containerName="placement-db-sync" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.333634 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" containerName="placement-db-sync" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.333660 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ca2186-0d45-4246-9a45-4cca828f2e82" containerName="dnsmasq-dns" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.334586 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.340924 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.341288 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.341517 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.341647 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.341893 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gj2fx" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.420412 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cd5868dbb-cxxfw"] Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.456595 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-config-data\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.456656 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-internal-tls-certs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.456735 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-scripts\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.456758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f5e19c-ec70-424e-a446-09b1b78697be-logs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.456783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrdn\" (UniqueName: \"kubernetes.io/projected/c6f5e19c-ec70-424e-a446-09b1b78697be-kube-api-access-gjrdn\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.456820 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-combined-ca-bundle\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.456837 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-public-tls-certs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.558724 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-combined-ca-bundle\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.558771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-public-tls-certs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.558817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-config-data\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.558840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-internal-tls-certs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.558914 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-scripts\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.558939 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c6f5e19c-ec70-424e-a446-09b1b78697be-logs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.558963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjrdn\" (UniqueName: \"kubernetes.io/projected/c6f5e19c-ec70-424e-a446-09b1b78697be-kube-api-access-gjrdn\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.563698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-scripts\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.564072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-combined-ca-bundle\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.564089 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-public-tls-certs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.564502 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6f5e19c-ec70-424e-a446-09b1b78697be-logs\") pod \"placement-5cd5868dbb-cxxfw\" 
(UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.564751 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-config-data\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.567760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5e19c-ec70-424e-a446-09b1b78697be-internal-tls-certs\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.580207 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjrdn\" (UniqueName: \"kubernetes.io/projected/c6f5e19c-ec70-424e-a446-09b1b78697be-kube-api-access-gjrdn\") pod \"placement-5cd5868dbb-cxxfw\" (UID: \"c6f5e19c-ec70-424e-a446-09b1b78697be\") " pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:31 crc kubenswrapper[4832]: I0125 08:16:31.662675 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:32 crc kubenswrapper[4832]: I0125 08:16:32.247237 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cd5868dbb-cxxfw"] Jan 25 08:16:32 crc kubenswrapper[4832]: W0125 08:16:32.257644 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6f5e19c_ec70_424e_a446_09b1b78697be.slice/crio-73621412426c70acc65a3efbe11dc3e8cd3972349445cc8154cef14c8e71ef28 WatchSource:0}: Error finding container 73621412426c70acc65a3efbe11dc3e8cd3972349445cc8154cef14c8e71ef28: Status 404 returned error can't find the container with id 73621412426c70acc65a3efbe11dc3e8cd3972349445cc8154cef14c8e71ef28 Jan 25 08:16:33 crc kubenswrapper[4832]: I0125 08:16:33.244155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd5868dbb-cxxfw" event={"ID":"c6f5e19c-ec70-424e-a446-09b1b78697be","Type":"ContainerStarted","Data":"c2d93da09803d4c7ea0121a3d1623044877e75d28a597705a66cf475f0184295"} Jan 25 08:16:33 crc kubenswrapper[4832]: I0125 08:16:33.244634 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd5868dbb-cxxfw" event={"ID":"c6f5e19c-ec70-424e-a446-09b1b78697be","Type":"ContainerStarted","Data":"73621412426c70acc65a3efbe11dc3e8cd3972349445cc8154cef14c8e71ef28"} Jan 25 08:16:33 crc kubenswrapper[4832]: I0125 08:16:33.254279 4832 generic.go:334] "Generic (PLEG): container finished" podID="88b922f3-0125-4078-8ec7-ad4edd04d0ed" containerID="2bc24f26d829b53a811da3b1657056332cb5bca551cb0d9c4b02484b0306b433" exitCode=0 Jan 25 08:16:33 crc kubenswrapper[4832]: I0125 08:16:33.254355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnzjb" event={"ID":"88b922f3-0125-4078-8ec7-ad4edd04d0ed","Type":"ContainerDied","Data":"2bc24f26d829b53a811da3b1657056332cb5bca551cb0d9c4b02484b0306b433"} Jan 25 08:16:36 crc kubenswrapper[4832]: 
I0125 08:16:36.318739 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" containerID="7d46d3eff94d22ea0ddca1e6e36f9d0cc0da8afd772359a66cb4417d7e75bfec" exitCode=0 Jan 25 08:16:36 crc kubenswrapper[4832]: I0125 08:16:36.318961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xdqfx" event={"ID":"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86","Type":"ContainerDied","Data":"7d46d3eff94d22ea0ddca1e6e36f9d0cc0da8afd772359a66cb4417d7e75bfec"} Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.023847 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dnzjb" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.185725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-config-data\") pod \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.185994 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6g5x\" (UniqueName: \"kubernetes.io/projected/88b922f3-0125-4078-8ec7-ad4edd04d0ed-kube-api-access-t6g5x\") pod \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.186019 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-db-sync-config-data\") pod \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.186063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-combined-ca-bundle\") pod \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\" (UID: \"88b922f3-0125-4078-8ec7-ad4edd04d0ed\") " Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.210881 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "88b922f3-0125-4078-8ec7-ad4edd04d0ed" (UID: "88b922f3-0125-4078-8ec7-ad4edd04d0ed"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.214592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b922f3-0125-4078-8ec7-ad4edd04d0ed-kube-api-access-t6g5x" (OuterVolumeSpecName: "kube-api-access-t6g5x") pod "88b922f3-0125-4078-8ec7-ad4edd04d0ed" (UID: "88b922f3-0125-4078-8ec7-ad4edd04d0ed"). InnerVolumeSpecName "kube-api-access-t6g5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.248496 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88b922f3-0125-4078-8ec7-ad4edd04d0ed" (UID: "88b922f3-0125-4078-8ec7-ad4edd04d0ed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.292117 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6g5x\" (UniqueName: \"kubernetes.io/projected/88b922f3-0125-4078-8ec7-ad4edd04d0ed-kube-api-access-t6g5x\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.292173 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.292187 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.324672 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-config-data" (OuterVolumeSpecName: "config-data") pod "88b922f3-0125-4078-8ec7-ad4edd04d0ed" (UID: "88b922f3-0125-4078-8ec7-ad4edd04d0ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.348325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnzjb" event={"ID":"88b922f3-0125-4078-8ec7-ad4edd04d0ed","Type":"ContainerDied","Data":"247fdb440e453d46419f89dae43eed7cd9e2f304234fbe8ac79722c75dd0e797"} Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.348379 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247fdb440e453d46419f89dae43eed7cd9e2f304234fbe8ac79722c75dd0e797" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.348466 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnzjb" Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.351632 4832 generic.go:334] "Generic (PLEG): container finished" podID="e793ce7a-261b-4b97-8436-c7a5efc5e126" containerID="882c4811454c01f87f413004ff277f6ed02b5c631dc3dfb6708b5bf0b9e8e5b1" exitCode=0 Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.351727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrvb2" event={"ID":"e793ce7a-261b-4b97-8436-c7a5efc5e126","Type":"ContainerDied","Data":"882c4811454c01f87f413004ff277f6ed02b5c631dc3dfb6708b5bf0b9e8e5b1"} Jan 25 08:16:37 crc kubenswrapper[4832]: I0125 08:16:37.393940 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b922f3-0125-4078-8ec7-ad4edd04d0ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.080310 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.214093 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-db-sync-config-data\") pod \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.214200 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xcbj\" (UniqueName: \"kubernetes.io/projected/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-kube-api-access-5xcbj\") pod \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.214269 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-combined-ca-bundle\") pod \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\" (UID: \"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86\") " Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.219150 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" (UID: "f4bbdba8-c7bc-4dd7-ae19-1655bc089a86"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.223978 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-kube-api-access-5xcbj" (OuterVolumeSpecName: "kube-api-access-5xcbj") pod "f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" (UID: "f4bbdba8-c7bc-4dd7-ae19-1655bc089a86"). InnerVolumeSpecName "kube-api-access-5xcbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.259119 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" (UID: "f4bbdba8-c7bc-4dd7-ae19-1655bc089a86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.316043 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.316074 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xcbj\" (UniqueName: \"kubernetes.io/projected/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-kube-api-access-5xcbj\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.316085 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.412194 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xdqfx" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.416208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xdqfx" event={"ID":"f4bbdba8-c7bc-4dd7-ae19-1655bc089a86","Type":"ContainerDied","Data":"9852031006e2acef3d7437f0532401e427230998dce4ddc63ff7eb29fb7daee9"} Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.416283 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9852031006e2acef3d7437f0532401e427230998dce4ddc63ff7eb29fb7daee9" Jan 25 08:16:38 crc kubenswrapper[4832]: E0125 08:16:38.615548 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.721345 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8n6dh"] Jan 25 08:16:38 crc kubenswrapper[4832]: E0125 08:16:38.721862 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b922f3-0125-4078-8ec7-ad4edd04d0ed" containerName="glance-db-sync" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.721876 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b922f3-0125-4078-8ec7-ad4edd04d0ed" containerName="glance-db-sync" Jan 25 08:16:38 crc kubenswrapper[4832]: E0125 08:16:38.721910 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" containerName="barbican-db-sync" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.721918 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" containerName="barbican-db-sync" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.722140 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" containerName="barbican-db-sync" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.722184 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b922f3-0125-4078-8ec7-ad4edd04d0ed" containerName="glance-db-sync" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.723356 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.768041 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8n6dh"] Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.842003 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.842061 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.842093 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.842318 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.842508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-config\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.843912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcgsc\" (UniqueName: \"kubernetes.io/projected/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-kube-api-access-gcgsc\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.868621 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b4947bb84-pmdh6"] Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.932952 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.936888 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.937336 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bmfkx" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.937629 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961682 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4899f618-1f51-4d34-9970-7c096359b47e-logs\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961775 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961845 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961872 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-config-data-custom\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961964 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.961981 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-config-data\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 
08:16:38.962018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-config\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.962049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcgsc\" (UniqueName: \"kubernetes.io/projected/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-kube-api-access-gcgsc\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.962077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zln\" (UniqueName: \"kubernetes.io/projected/4899f618-1f51-4d34-9970-7c096359b47e-kube-api-access-p4zln\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.983902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.989691 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 
08:16:38.990771 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.996148 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-855cdf875c-rxk79"] Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.996461 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-config\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.997050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:38 crc kubenswrapper[4832]: I0125 08:16:38.999070 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.018675 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.051922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcgsc\" (UniqueName: \"kubernetes.io/projected/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-kube-api-access-gcgsc\") pod \"dnsmasq-dns-84b966f6c9-8n6dh\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.058830 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.066850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-config-data\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.066928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4zln\" (UniqueName: \"kubernetes.io/projected/4899f618-1f51-4d34-9970-7c096359b47e-kube-api-access-p4zln\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.066967 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-combined-ca-bundle\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: 
\"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.066989 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4899f618-1f51-4d34-9970-7c096359b47e-logs\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.067011 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26baac3d-6d07-4f33-956e-4048e3318099-logs\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.067049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.067088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-config-data-custom\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.067117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tg9\" (UniqueName: 
\"kubernetes.io/projected/26baac3d-6d07-4f33-956e-4048e3318099-kube-api-access-77tg9\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.067138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-config-data-custom\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.067161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-config-data\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.069570 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4899f618-1f51-4d34-9970-7c096359b47e-logs\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.076105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.076235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-config-data-custom\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.080708 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4899f618-1f51-4d34-9970-7c096359b47e-config-data\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.102477 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b4947bb84-pmdh6"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.129088 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4zln\" (UniqueName: \"kubernetes.io/projected/4899f618-1f51-4d34-9970-7c096359b47e-kube-api-access-p4zln\") pod \"barbican-keystone-listener-7b4947bb84-pmdh6\" (UID: \"4899f618-1f51-4d34-9970-7c096359b47e\") " pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.162448 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-855cdf875c-rxk79"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.169807 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-config-data-custom\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.169880 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-77tg9\" (UniqueName: \"kubernetes.io/projected/26baac3d-6d07-4f33-956e-4048e3318099-kube-api-access-77tg9\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.169910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-config-data\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.169990 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-combined-ca-bundle\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.170013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26baac3d-6d07-4f33-956e-4048e3318099-logs\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.170548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26baac3d-6d07-4f33-956e-4048e3318099-logs\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.175756 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8n6dh"] Jan 25 08:16:39 crc 
kubenswrapper[4832]: I0125 08:16:39.185025 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-combined-ca-bundle\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.192784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-config-data-custom\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.203431 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26baac3d-6d07-4f33-956e-4048e3318099-config-data\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.207106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tg9\" (UniqueName: \"kubernetes.io/projected/26baac3d-6d07-4f33-956e-4048e3318099-kube-api-access-77tg9\") pod \"barbican-worker-855cdf875c-rxk79\" (UID: \"26baac3d-6d07-4f33-956e-4048e3318099\") " pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.228445 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-582pd"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.229856 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.255820 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-855cdf875c-rxk79" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.271531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.271602 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.271638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wr54\" (UniqueName: \"kubernetes.io/projected/986e4317-1281-48cd-962b-0873de0e5744-kube-api-access-6wr54\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.271707 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.271750 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.271767 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-config\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.289492 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.293095 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-582pd"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.303441 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d6d8975cd-v8jf8"] Jan 25 08:16:39 crc kubenswrapper[4832]: E0125 08:16:39.303917 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e793ce7a-261b-4b97-8436-c7a5efc5e126" containerName="cinder-db-sync" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.303933 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e793ce7a-261b-4b97-8436-c7a5efc5e126" containerName="cinder-db-sync" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.304107 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e793ce7a-261b-4b97-8436-c7a5efc5e126" containerName="cinder-db-sync" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.312436 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.316187 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.338484 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d6d8975cd-v8jf8"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.338969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.382552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e793ce7a-261b-4b97-8436-c7a5efc5e126-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e793ce7a-261b-4b97-8436-c7a5efc5e126" (UID: "e793ce7a-261b-4b97-8436-c7a5efc5e126"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.383086 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e793ce7a-261b-4b97-8436-c7a5efc5e126-etc-machine-id\") pod \"e793ce7a-261b-4b97-8436-c7a5efc5e126\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.383250 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxq2n\" (UniqueName: \"kubernetes.io/projected/e793ce7a-261b-4b97-8436-c7a5efc5e126-kube-api-access-vxq2n\") pod \"e793ce7a-261b-4b97-8436-c7a5efc5e126\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.384320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-db-sync-config-data\") pod \"e793ce7a-261b-4b97-8436-c7a5efc5e126\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.384415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-combined-ca-bundle\") pod \"e793ce7a-261b-4b97-8436-c7a5efc5e126\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.384573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-config-data\") pod \"e793ce7a-261b-4b97-8436-c7a5efc5e126\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.384700 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-scripts\") pod \"e793ce7a-261b-4b97-8436-c7a5efc5e126\" (UID: \"e793ce7a-261b-4b97-8436-c7a5efc5e126\") " Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.385301 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data-custom\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.386457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-combined-ca-bundle\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: 
\"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.386540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.386613 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjjjs\" (UniqueName: \"kubernetes.io/projected/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-kube-api-access-bjjjs\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.386655 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-logs\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.386770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-config\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.386791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.386876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.392410 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.392567 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.392725 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wr54\" (UniqueName: \"kubernetes.io/projected/986e4317-1281-48cd-962b-0873de0e5744-kube-api-access-6wr54\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.392933 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e793ce7a-261b-4b97-8436-c7a5efc5e126-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.415233 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-config\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.417517 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.417603 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.418496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.418524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.429154 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-scripts" (OuterVolumeSpecName: "scripts") pod "e793ce7a-261b-4b97-8436-c7a5efc5e126" (UID: "e793ce7a-261b-4b97-8436-c7a5efc5e126"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.429339 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e793ce7a-261b-4b97-8436-c7a5efc5e126-kube-api-access-vxq2n" (OuterVolumeSpecName: "kube-api-access-vxq2n") pod "e793ce7a-261b-4b97-8436-c7a5efc5e126" (UID: "e793ce7a-261b-4b97-8436-c7a5efc5e126"). InnerVolumeSpecName "kube-api-access-vxq2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.436191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e793ce7a-261b-4b97-8436-c7a5efc5e126" (UID: "e793ce7a-261b-4b97-8436-c7a5efc5e126"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.440377 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wr54\" (UniqueName: \"kubernetes.io/projected/986e4317-1281-48cd-962b-0873de0e5744-kube-api-access-6wr54\") pod \"dnsmasq-dns-75c8ddd69c-582pd\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.487444 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e793ce7a-261b-4b97-8436-c7a5efc5e126" (UID: "e793ce7a-261b-4b97-8436-c7a5efc5e126"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504419 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-combined-ca-bundle\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjjjs\" (UniqueName: \"kubernetes.io/projected/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-kube-api-access-bjjjs\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-logs\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504563 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504715 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data-custom\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " 
pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504809 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504826 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504836 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.504847 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxq2n\" (UniqueName: \"kubernetes.io/projected/e793ce7a-261b-4b97-8436-c7a5efc5e126-kube-api-access-vxq2n\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.505481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-logs\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.518310 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-config-data" (OuterVolumeSpecName: "config-data") pod "e793ce7a-261b-4b97-8436-c7a5efc5e126" (UID: "e793ce7a-261b-4b97-8436-c7a5efc5e126"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.520211 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerStarted","Data":"acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801"} Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.520534 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="ceilometer-notification-agent" containerID="cri-o://f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7" gracePeriod=30 Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.521969 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.522272 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="proxy-httpd" containerID="cri-o://acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801" gracePeriod=30 Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.522330 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="sg-core" containerID="cri-o://dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667" gracePeriod=30 Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.528872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.529445 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data-custom\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.530330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-combined-ca-bundle\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.549459 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.551167 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.557853 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.558528 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8rn6w" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.558560 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.559411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjjjs\" (UniqueName: \"kubernetes.io/projected/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-kube-api-access-bjjjs\") pod \"barbican-api-6d6d8975cd-v8jf8\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc 
kubenswrapper[4832]: I0125 08:16:39.572054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd5868dbb-cxxfw" event={"ID":"c6f5e19c-ec70-424e-a446-09b1b78697be","Type":"ContainerStarted","Data":"f63093c448aef39511b6c26e0f5d07487acbbb1e89addbc03d6013d0d6ccc68b"} Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.575503 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.575567 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.580713 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.598357 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.610884 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.610969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-logs\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.610988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8r8b\" (UniqueName: 
\"kubernetes.io/projected/a7429790-03f9-46f3-96d2-5cf0e5323437-kube-api-access-d8r8b\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.611067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.611126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.611154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.611183 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.611253 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e793ce7a-261b-4b97-8436-c7a5efc5e126-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.612491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrvb2" event={"ID":"e793ce7a-261b-4b97-8436-c7a5efc5e126","Type":"ContainerDied","Data":"4ef0043ab9b84224998d2924f415885a4ca6ee4ec856bd4bbbdc72dd45a762ee"} Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.612525 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef0043ab9b84224998d2924f415885a4ca6ee4ec856bd4bbbdc72dd45a762ee" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.612575 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vrvb2" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.704509 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.705195 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5cd5868dbb-cxxfw" podStartSLOduration=8.705178902 podStartE2EDuration="8.705178902s" podCreationTimestamp="2026-01-25 08:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:39.666947376 +0000 UTC m=+1182.340770909" watchObservedRunningTime="2026-01-25 08:16:39.705178902 +0000 UTC m=+1182.379002435" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.720696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc 
kubenswrapper[4832]: I0125 08:16:39.720746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.720769 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.720808 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.720866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-logs\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.720884 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8r8b\" (UniqueName: \"kubernetes.io/projected/a7429790-03f9-46f3-96d2-5cf0e5323437-kube-api-access-d8r8b\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.720926 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.721323 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.726559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.736077 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-logs\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.743482 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.746779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.781999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.819026 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-856b6b4996-m59cl" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.853968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8r8b\" (UniqueName: \"kubernetes.io/projected/a7429790-03f9-46f3-96d2-5cf0e5323437-kube-api-access-d8r8b\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.871824 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.875323 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.885026 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.900962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.909750 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.944149 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f649cfc6-vzpx7" podUID="26fd6803-3263-4989-a86e-908f6a504d14" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.944281 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.944321 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.944607 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 25 08:16:39 crc kubenswrapper[4832]: I0125 08:16:39.990685 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-975sp" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.008598 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.014136 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.043578 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.044079 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.086550 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-582pd"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.096905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.096984 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.097031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-scripts\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.097078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20df59e8-9934-47c9-9d8f-a97e0f046368-etc-machine-id\") 
pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.097132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22mls\" (UniqueName: \"kubernetes.io/projected/20df59e8-9934-47c9-9d8f-a97e0f046368-kube-api-access-22mls\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.097171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskr5\" (UniqueName: \"kubernetes.io/projected/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-kube-api-access-tskr5\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-scripts\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207230 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207256 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20df59e8-9934-47c9-9d8f-a97e0f046368-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207362 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22mls\" (UniqueName: \"kubernetes.io/projected/20df59e8-9934-47c9-9d8f-a97e0f046368-kube-api-access-22mls\") pod \"cinder-scheduler-0\" (UID: 
\"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.207910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.208007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.208044 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc 
kubenswrapper[4832]: I0125 08:16:40.217268 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20df59e8-9934-47c9-9d8f-a97e0f046368-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.218145 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-scripts\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.227943 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.250191 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.292786 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.310259 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.310342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.310376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tskr5\" (UniqueName: \"kubernetes.io/projected/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-kube-api-access-tskr5\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.310428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.310444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.310480 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.310503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.314916 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.317498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.317820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.318369 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") device mount path \"/mnt/openstack/pv05\"" 
pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.322076 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.324182 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22mls\" (UniqueName: \"kubernetes.io/projected/20df59e8-9934-47c9-9d8f-a97e0f046368-kube-api-access-22mls\") pod \"cinder-scheduler-0\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.334706 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.364417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.383730 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.419873 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-5ld69"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.430863 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.492673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tskr5\" (UniqueName: \"kubernetes.io/projected/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-kube-api-access-tskr5\") pod \"glance-default-internal-api-0\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.517348 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8n6dh"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.519203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.519304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gg5k\" (UniqueName: \"kubernetes.io/projected/23584092-31c4-45a1-bf04-88e7f6bb9ece-kube-api-access-7gg5k\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.525062 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.525226 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-config\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.525307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-svc\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.525399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.582224 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-5ld69"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.632562 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.632642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gg5k\" (UniqueName: \"kubernetes.io/projected/23584092-31c4-45a1-bf04-88e7f6bb9ece-kube-api-access-7gg5k\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: 
\"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.632693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.632721 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-config\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.632746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-svc\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.632772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.634167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " 
pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.634885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.635557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-config\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.635708 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.635872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-svc\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.674495 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.676649 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gg5k\" (UniqueName: \"kubernetes.io/projected/23584092-31c4-45a1-bf04-88e7f6bb9ece-kube-api-access-7gg5k\") pod \"dnsmasq-dns-5784cf869f-5ld69\" (UID: 
\"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.676698 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.679664 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.681055 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.706644 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.730080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" event={"ID":"bae90205-c6b8-4fa2-b527-e9788ef6ae5b","Type":"ContainerStarted","Data":"aac097c812dd6dc72b5ea4201c53744621d4b4820301967098cd5375077e8a2c"} Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.735711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-scripts\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.735820 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57235bbb-0d8b-45ea-ad16-e42723ce9047-logs\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.735849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.735887 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57235bbb-0d8b-45ea-ad16-e42723ce9047-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.735939 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.735975 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dcg\" (UniqueName: \"kubernetes.io/projected/57235bbb-0d8b-45ea-ad16-e42723ce9047-kube-api-access-59dcg\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.736077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data-custom\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.807284 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.839114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57235bbb-0d8b-45ea-ad16-e42723ce9047-logs\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.839500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.839610 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57235bbb-0d8b-45ea-ad16-e42723ce9047-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.839744 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.839824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dcg\" (UniqueName: \"kubernetes.io/projected/57235bbb-0d8b-45ea-ad16-e42723ce9047-kube-api-access-59dcg\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.839956 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data-custom\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.840085 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-scripts\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.842427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57235bbb-0d8b-45ea-ad16-e42723ce9047-logs\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.847982 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-scripts\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.850536 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57235bbb-0d8b-45ea-ad16-e42723ce9047-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.851624 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.866204 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.868450 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data-custom\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.900160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dcg\" (UniqueName: \"kubernetes.io/projected/57235bbb-0d8b-45ea-ad16-e42723ce9047-kube-api-access-59dcg\") pod \"cinder-api-0\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " pod="openstack/cinder-api-0" Jan 25 08:16:40 crc kubenswrapper[4832]: I0125 08:16:40.907378 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-855cdf875c-rxk79"] Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.053073 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.092300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b4947bb84-pmdh6"] Jan 25 08:16:41 crc kubenswrapper[4832]: W0125 08:16:41.128858 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4899f618_1f51_4d34_9970_7c096359b47e.slice/crio-e05e1ce0da5156ef2f2788464a9a1a3c101f1568a97141741b4a0dfc7d7e3451 WatchSource:0}: Error finding container e05e1ce0da5156ef2f2788464a9a1a3c101f1568a97141741b4a0dfc7d7e3451: Status 404 returned error can't find the container with id e05e1ce0da5156ef2f2788464a9a1a3c101f1568a97141741b4a0dfc7d7e3451 Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.444032 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-582pd"] Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.515895 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d6d8975cd-v8jf8"] Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.536282 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.762120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.837992 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.844754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" event={"ID":"986e4317-1281-48cd-962b-0873de0e5744","Type":"ContainerStarted","Data":"bc7333c71eb44d1d3685b4a89b7fde97d817aec90d41b7fc7cfafdbd576f9fb2"} Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.850583 4832 generic.go:334] "Generic 
(PLEG): container finished" podID="bae90205-c6b8-4fa2-b527-e9788ef6ae5b" containerID="10f8207df235e165ac87488b90a96abccb9c9fac006f274e9abb7d308c89e414" exitCode=0 Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.850779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" event={"ID":"bae90205-c6b8-4fa2-b527-e9788ef6ae5b","Type":"ContainerDied","Data":"10f8207df235e165ac87488b90a96abccb9c9fac006f274e9abb7d308c89e414"} Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.883906 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d6d8975cd-v8jf8" event={"ID":"31271ce3-bbf8-4033-b2ba-5e47f4e9a151","Type":"ContainerStarted","Data":"f335e6e3ca8d120dfbad0813fc9a9b858a9dd23b810eed789b4c3dba1d083056"} Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.906406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7429790-03f9-46f3-96d2-5cf0e5323437","Type":"ContainerStarted","Data":"772cc3399cb5a642c32abf2cd2fea48abdd4e5d59e8938f4689361fdcbeca864"} Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.919147 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-5ld69"] Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.925767 4832 generic.go:334] "Generic (PLEG): container finished" podID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerID="acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801" exitCode=0 Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.925879 4832 generic.go:334] "Generic (PLEG): container finished" podID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerID="dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667" exitCode=2 Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.926036 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerDied","Data":"acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801"} Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.926148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerDied","Data":"dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667"} Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.930681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-855cdf875c-rxk79" event={"ID":"26baac3d-6d07-4f33-956e-4048e3318099","Type":"ContainerStarted","Data":"e31cc48063751b433e3d5f968d75ef0aec9c2d92494650c4725b97fbd75d42f6"} Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.934298 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:16:41 crc kubenswrapper[4832]: I0125 08:16:41.935871 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" event={"ID":"4899f618-1f51-4d34-9970-7c096359b47e","Type":"ContainerStarted","Data":"e05e1ce0da5156ef2f2788464a9a1a3c101f1568a97141741b4a0dfc7d7e3451"} Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.192584 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.375899 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.515931 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.525035 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-nb\") pod \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.525139 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-svc\") pod \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.525200 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-config\") pod \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.525252 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-swift-storage-0\") pod \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.525308 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-sb\") pod \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " 
Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.526339 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcgsc\" (UniqueName: \"kubernetes.io/projected/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-kube-api-access-gcgsc\") pod \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\" (UID: \"bae90205-c6b8-4fa2-b527-e9788ef6ae5b\") " Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.542645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-kube-api-access-gcgsc" (OuterVolumeSpecName: "kube-api-access-gcgsc") pod "bae90205-c6b8-4fa2-b527-e9788ef6ae5b" (UID: "bae90205-c6b8-4fa2-b527-e9788ef6ae5b"). InnerVolumeSpecName "kube-api-access-gcgsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.556232 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bae90205-c6b8-4fa2-b527-e9788ef6ae5b" (UID: "bae90205-c6b8-4fa2-b527-e9788ef6ae5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.568120 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bae90205-c6b8-4fa2-b527-e9788ef6ae5b" (UID: "bae90205-c6b8-4fa2-b527-e9788ef6ae5b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.578202 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bae90205-c6b8-4fa2-b527-e9788ef6ae5b" (UID: "bae90205-c6b8-4fa2-b527-e9788ef6ae5b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.584056 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bae90205-c6b8-4fa2-b527-e9788ef6ae5b" (UID: "bae90205-c6b8-4fa2-b527-e9788ef6ae5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.606690 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-config" (OuterVolumeSpecName: "config") pod "bae90205-c6b8-4fa2-b527-e9788ef6ae5b" (UID: "bae90205-c6b8-4fa2-b527-e9788ef6ae5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.632937 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.633009 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.633021 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.633031 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.633043 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:42 crc kubenswrapper[4832]: I0125 08:16:42.633052 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcgsc\" (UniqueName: \"kubernetes.io/projected/bae90205-c6b8-4fa2-b527-e9788ef6ae5b-kube-api-access-gcgsc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.006043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7429790-03f9-46f3-96d2-5cf0e5323437","Type":"ContainerStarted","Data":"a59ae8b90264c76e026f186a375ce99f1a0ea5f86a87ea27bf15d9c60ad2587b"} Jan 25 08:16:43 crc 
kubenswrapper[4832]: I0125 08:16:43.031719 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20df59e8-9934-47c9-9d8f-a97e0f046368","Type":"ContainerStarted","Data":"50105461bcbe121d53114c0cc573823f4f72024263a595b9363688fc3b8d5881"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.041068 4832 generic.go:334] "Generic (PLEG): container finished" podID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerID="b14131af1f01635c790897f065c2918beb976670f6a0aa776de8cb70a7977691" exitCode=0 Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.041243 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" event={"ID":"23584092-31c4-45a1-bf04-88e7f6bb9ece","Type":"ContainerDied","Data":"b14131af1f01635c790897f065c2918beb976670f6a0aa776de8cb70a7977691"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.041285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" event={"ID":"23584092-31c4-45a1-bf04-88e7f6bb9ece","Type":"ContainerStarted","Data":"3a9334c361bca692b685a64fae3b6a9bb4c9df39a7756612e7c611056f12bab4"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.052211 4832 generic.go:334] "Generic (PLEG): container finished" podID="986e4317-1281-48cd-962b-0873de0e5744" containerID="fafa4bc0f3b74a34fc849ba9d298b937bb7f565887ef7b9fb4566a43a735f35a" exitCode=0 Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.052324 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" event={"ID":"986e4317-1281-48cd-962b-0873de0e5744","Type":"ContainerDied","Data":"fafa4bc0f3b74a34fc849ba9d298b937bb7f565887ef7b9fb4566a43a735f35a"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.073790 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" 
event={"ID":"bae90205-c6b8-4fa2-b527-e9788ef6ae5b","Type":"ContainerDied","Data":"aac097c812dd6dc72b5ea4201c53744621d4b4820301967098cd5375077e8a2c"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.073841 4832 scope.go:117] "RemoveContainer" containerID="10f8207df235e165ac87488b90a96abccb9c9fac006f274e9abb7d308c89e414" Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.073995 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8n6dh" Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.099563 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57235bbb-0d8b-45ea-ad16-e42723ce9047","Type":"ContainerStarted","Data":"88d93d8922f40a654683282cb6c67b5d9a2abcfb865d9c7d1af96a3a9b19ec48"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.110331 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e","Type":"ContainerStarted","Data":"765105d11f9a9e0a6de4d477348f16fde3891fcd1fcb4f70c8fa137cac7aed7e"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.176102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d6d8975cd-v8jf8" event={"ID":"31271ce3-bbf8-4033-b2ba-5e47f4e9a151","Type":"ContainerStarted","Data":"ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.176267 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.176289 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.176306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d6d8975cd-v8jf8" 
event={"ID":"31271ce3-bbf8-4033-b2ba-5e47f4e9a151","Type":"ContainerStarted","Data":"ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968"} Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.474226 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d6d8975cd-v8jf8" podStartSLOduration=4.474198337 podStartE2EDuration="4.474198337s" podCreationTimestamp="2026-01-25 08:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:43.329322245 +0000 UTC m=+1186.003145798" watchObservedRunningTime="2026-01-25 08:16:43.474198337 +0000 UTC m=+1186.148021870" Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.502187 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8n6dh"] Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.510973 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8n6dh"] Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.726054 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae90205-c6b8-4fa2-b527-e9788ef6ae5b" path="/var/lib/kubelet/pods/bae90205-c6b8-4fa2-b527-e9788ef6ae5b/volumes" Jan 25 08:16:43 crc kubenswrapper[4832]: I0125 08:16:43.917833 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.085355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-sb\") pod \"986e4317-1281-48cd-962b-0873de0e5744\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.085859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-config\") pod \"986e4317-1281-48cd-962b-0873de0e5744\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.086206 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-nb\") pod \"986e4317-1281-48cd-962b-0873de0e5744\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.086255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-svc\") pod \"986e4317-1281-48cd-962b-0873de0e5744\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.086280 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-swift-storage-0\") pod \"986e4317-1281-48cd-962b-0873de0e5744\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.086350 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wr54\" 
(UniqueName: \"kubernetes.io/projected/986e4317-1281-48cd-962b-0873de0e5744-kube-api-access-6wr54\") pod \"986e4317-1281-48cd-962b-0873de0e5744\" (UID: \"986e4317-1281-48cd-962b-0873de0e5744\") " Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.113244 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.132852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "986e4317-1281-48cd-962b-0873de0e5744" (UID: "986e4317-1281-48cd-962b-0873de0e5744"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.164447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986e4317-1281-48cd-962b-0873de0e5744-kube-api-access-6wr54" (OuterVolumeSpecName: "kube-api-access-6wr54") pod "986e4317-1281-48cd-962b-0873de0e5744" (UID: "986e4317-1281-48cd-962b-0873de0e5744"). InnerVolumeSpecName "kube-api-access-6wr54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.174552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-config" (OuterVolumeSpecName: "config") pod "986e4317-1281-48cd-962b-0873de0e5744" (UID: "986e4317-1281-48cd-962b-0873de0e5744"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.188721 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.189116 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wr54\" (UniqueName: \"kubernetes.io/projected/986e4317-1281-48cd-962b-0873de0e5744-kube-api-access-6wr54\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.189189 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.189010 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "986e4317-1281-48cd-962b-0873de0e5744" (UID: "986e4317-1281-48cd-962b-0873de0e5744"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.190000 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "986e4317-1281-48cd-962b-0873de0e5744" (UID: "986e4317-1281-48cd-962b-0873de0e5744"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.210982 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "986e4317-1281-48cd-962b-0873de0e5744" (UID: "986e4317-1281-48cd-962b-0873de0e5744"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.237844 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.267267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" event={"ID":"23584092-31c4-45a1-bf04-88e7f6bb9ece","Type":"ContainerStarted","Data":"b8928205d0efd78f2007dc8145ab2101564458ff697c4c0457d12393a40ff035"} Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.268923 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.300841 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.300884 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.300897 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/986e4317-1281-48cd-962b-0873de0e5744-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.308123 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" event={"ID":"986e4317-1281-48cd-962b-0873de0e5744","Type":"ContainerDied","Data":"bc7333c71eb44d1d3685b4a89b7fde97d817aec90d41b7fc7cfafdbd576f9fb2"} Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.308178 4832 scope.go:117] "RemoveContainer" containerID="fafa4bc0f3b74a34fc849ba9d298b937bb7f565887ef7b9fb4566a43a735f35a" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.308309 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-582pd" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.319288 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" podStartSLOduration=4.319268641 podStartE2EDuration="4.319268641s" podCreationTimestamp="2026-01-25 08:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:44.299678928 +0000 UTC m=+1186.973502481" watchObservedRunningTime="2026-01-25 08:16:44.319268641 +0000 UTC m=+1186.993092174" Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.358675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57235bbb-0d8b-45ea-ad16-e42723ce9047","Type":"ContainerStarted","Data":"ba3fd16ebff598e441a9cf6472d940128bd9a9fcc112f1d4c39af51485562467"} Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.365657 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e","Type":"ContainerStarted","Data":"a0c9d10629fda3bdc77408cc23ba7e9811800a8d879969e78f7af0bd70a947a1"} Jan 25 08:16:44 crc kubenswrapper[4832]: I0125 08:16:44.450128 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-582pd"] Jan 25 08:16:44 crc 
kubenswrapper[4832]: I0125 08:16:44.489297 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-582pd"] Jan 25 08:16:45 crc kubenswrapper[4832]: I0125 08:16:45.306160 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cd5868dbb-cxxfw" Jan 25 08:16:45 crc kubenswrapper[4832]: I0125 08:16:45.411135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7429790-03f9-46f3-96d2-5cf0e5323437","Type":"ContainerStarted","Data":"f54a9d4c576e11ec459f931e16bc247e33e80eceec82fa91cd62dc1de2c6ddb5"} Jan 25 08:16:45 crc kubenswrapper[4832]: I0125 08:16:45.412130 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-log" containerID="cri-o://a59ae8b90264c76e026f186a375ce99f1a0ea5f86a87ea27bf15d9c60ad2587b" gracePeriod=30 Jan 25 08:16:45 crc kubenswrapper[4832]: I0125 08:16:45.412425 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-httpd" containerID="cri-o://f54a9d4c576e11ec459f931e16bc247e33e80eceec82fa91cd62dc1de2c6ddb5" gracePeriod=30 Jan 25 08:16:45 crc kubenswrapper[4832]: I0125 08:16:45.415937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20df59e8-9934-47c9-9d8f-a97e0f046368","Type":"ContainerStarted","Data":"09ca5f5ac2308a34d67b7f3713bdec702e3804405ce910494e50503e064a9dba"} Jan 25 08:16:45 crc kubenswrapper[4832]: I0125 08:16:45.468866 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.468838879 podStartE2EDuration="7.468838879s" podCreationTimestamp="2026-01-25 08:16:38 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:45.440659888 +0000 UTC m=+1188.114483421" watchObservedRunningTime="2026-01-25 08:16:45.468838879 +0000 UTC m=+1188.142662402" Jan 25 08:16:45 crc kubenswrapper[4832]: I0125 08:16:45.693913 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986e4317-1281-48cd-962b-0873de0e5744" path="/var/lib/kubelet/pods/986e4317-1281-48cd-962b-0873de0e5744/volumes" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.009081 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.027777 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.172690 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-run-httpd\") pod \"b48b257e-ddb7-486d-8788-489ca788ac1f\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.172813 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-config-data\") pod \"b48b257e-ddb7-486d-8788-489ca788ac1f\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.172845 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5q9s\" (UniqueName: \"kubernetes.io/projected/b48b257e-ddb7-486d-8788-489ca788ac1f-kube-api-access-t5q9s\") pod \"b48b257e-ddb7-486d-8788-489ca788ac1f\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.172886 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-scripts\") pod \"b48b257e-ddb7-486d-8788-489ca788ac1f\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.172973 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-combined-ca-bundle\") pod \"b48b257e-ddb7-486d-8788-489ca788ac1f\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.172992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-log-httpd\") pod \"b48b257e-ddb7-486d-8788-489ca788ac1f\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.173026 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-sg-core-conf-yaml\") pod \"b48b257e-ddb7-486d-8788-489ca788ac1f\" (UID: \"b48b257e-ddb7-486d-8788-489ca788ac1f\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.174955 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b48b257e-ddb7-486d-8788-489ca788ac1f" (UID: "b48b257e-ddb7-486d-8788-489ca788ac1f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.179942 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b48b257e-ddb7-486d-8788-489ca788ac1f" (UID: "b48b257e-ddb7-486d-8788-489ca788ac1f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.186464 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48b257e-ddb7-486d-8788-489ca788ac1f-kube-api-access-t5q9s" (OuterVolumeSpecName: "kube-api-access-t5q9s") pod "b48b257e-ddb7-486d-8788-489ca788ac1f" (UID: "b48b257e-ddb7-486d-8788-489ca788ac1f"). InnerVolumeSpecName "kube-api-access-t5q9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.204881 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-scripts" (OuterVolumeSpecName: "scripts") pod "b48b257e-ddb7-486d-8788-489ca788ac1f" (UID: "b48b257e-ddb7-486d-8788-489ca788ac1f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.279709 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.279740 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b48b257e-ddb7-486d-8788-489ca788ac1f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.279753 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5q9s\" (UniqueName: \"kubernetes.io/projected/b48b257e-ddb7-486d-8788-489ca788ac1f-kube-api-access-t5q9s\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.279765 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.537121 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b48b257e-ddb7-486d-8788-489ca788ac1f" (UID: "b48b257e-ddb7-486d-8788-489ca788ac1f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.558713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b48b257e-ddb7-486d-8788-489ca788ac1f" (UID: "b48b257e-ddb7-486d-8788-489ca788ac1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.579584 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-config-data" (OuterVolumeSpecName: "config-data") pod "b48b257e-ddb7-486d-8788-489ca788ac1f" (UID: "b48b257e-ddb7-486d-8788-489ca788ac1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.633599 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.633680 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.633690 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48b257e-ddb7-486d-8788-489ca788ac1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.648576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" event={"ID":"4899f618-1f51-4d34-9970-7c096359b47e","Type":"ContainerStarted","Data":"0b5557efdb76de4a00b7b43e280754cd38dc04709d2e6409897912b63e6d2a01"} Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.685256 4832 generic.go:334] "Generic (PLEG): container finished" podID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerID="f54a9d4c576e11ec459f931e16bc247e33e80eceec82fa91cd62dc1de2c6ddb5" exitCode=0 Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.685292 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerID="a59ae8b90264c76e026f186a375ce99f1a0ea5f86a87ea27bf15d9c60ad2587b" exitCode=143 Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.685394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7429790-03f9-46f3-96d2-5cf0e5323437","Type":"ContainerDied","Data":"f54a9d4c576e11ec459f931e16bc247e33e80eceec82fa91cd62dc1de2c6ddb5"} Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.685427 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7429790-03f9-46f3-96d2-5cf0e5323437","Type":"ContainerDied","Data":"a59ae8b90264c76e026f186a375ce99f1a0ea5f86a87ea27bf15d9c60ad2587b"} Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.735003 4832 generic.go:334] "Generic (PLEG): container finished" podID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerID="f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7" exitCode=0 Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.735130 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerDied","Data":"f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7"} Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.735181 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b48b257e-ddb7-486d-8788-489ca788ac1f","Type":"ContainerDied","Data":"bd97d431faa8df4bf55472aa074f3fa273172c7c61c899751f1fbe4fb586947e"} Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.735205 4832 scope.go:117] "RemoveContainer" containerID="acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.737888 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.747124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-855cdf875c-rxk79" event={"ID":"26baac3d-6d07-4f33-956e-4048e3318099","Type":"ContainerStarted","Data":"78bbbd3302088b6a011018d6b2f9efa3b3684447ffbee079975268022feadb99"} Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.791165 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.901936 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.916154 4832 scope.go:117] "RemoveContainer" containerID="dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.942454 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-combined-ca-bundle\") pod \"a7429790-03f9-46f3-96d2-5cf0e5323437\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.942525 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8r8b\" (UniqueName: \"kubernetes.io/projected/a7429790-03f9-46f3-96d2-5cf0e5323437-kube-api-access-d8r8b\") pod \"a7429790-03f9-46f3-96d2-5cf0e5323437\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.942621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-httpd-run\") pod \"a7429790-03f9-46f3-96d2-5cf0e5323437\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " Jan 25 08:16:46 crc 
kubenswrapper[4832]: I0125 08:16:46.942722 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-scripts\") pod \"a7429790-03f9-46f3-96d2-5cf0e5323437\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.942791 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-config-data\") pod \"a7429790-03f9-46f3-96d2-5cf0e5323437\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.942819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-logs\") pod \"a7429790-03f9-46f3-96d2-5cf0e5323437\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.942842 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a7429790-03f9-46f3-96d2-5cf0e5323437\" (UID: \"a7429790-03f9-46f3-96d2-5cf0e5323437\") " Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.945713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a7429790-03f9-46f3-96d2-5cf0e5323437" (UID: "a7429790-03f9-46f3-96d2-5cf0e5323437"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.948080 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-logs" (OuterVolumeSpecName: "logs") pod "a7429790-03f9-46f3-96d2-5cf0e5323437" (UID: "a7429790-03f9-46f3-96d2-5cf0e5323437"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.952327 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.956564 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-scripts" (OuterVolumeSpecName: "scripts") pod "a7429790-03f9-46f3-96d2-5cf0e5323437" (UID: "a7429790-03f9-46f3-96d2-5cf0e5323437"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.962069 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7429790-03f9-46f3-96d2-5cf0e5323437-kube-api-access-d8r8b" (OuterVolumeSpecName: "kube-api-access-d8r8b") pod "a7429790-03f9-46f3-96d2-5cf0e5323437" (UID: "a7429790-03f9-46f3-96d2-5cf0e5323437"). InnerVolumeSpecName "kube-api-access-d8r8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.972942 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a7429790-03f9-46f3-96d2-5cf0e5323437" (UID: "a7429790-03f9-46f3-96d2-5cf0e5323437"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.973751 4832 scope.go:117] "RemoveContainer" containerID="f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.987239 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:16:46 crc kubenswrapper[4832]: E0125 08:16:46.989937 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986e4317-1281-48cd-962b-0873de0e5744" containerName="init" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.989968 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="986e4317-1281-48cd-962b-0873de0e5744" containerName="init" Jan 25 08:16:46 crc kubenswrapper[4832]: E0125 08:16:46.989987 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-httpd" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.989997 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-httpd" Jan 25 08:16:46 crc kubenswrapper[4832]: E0125 08:16:46.990017 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="sg-core" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990025 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="sg-core" Jan 25 08:16:46 crc kubenswrapper[4832]: E0125 08:16:46.990037 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="ceilometer-notification-agent" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990047 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="ceilometer-notification-agent" Jan 25 08:16:46 crc kubenswrapper[4832]: 
E0125 08:16:46.990061 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="proxy-httpd" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990069 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="proxy-httpd" Jan 25 08:16:46 crc kubenswrapper[4832]: E0125 08:16:46.990082 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae90205-c6b8-4fa2-b527-e9788ef6ae5b" containerName="init" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990090 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae90205-c6b8-4fa2-b527-e9788ef6ae5b" containerName="init" Jan 25 08:16:46 crc kubenswrapper[4832]: E0125 08:16:46.990105 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-log" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990114 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-log" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990693 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="sg-core" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990784 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="986e4317-1281-48cd-962b-0873de0e5744" containerName="init" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990816 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-log" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990826 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae90205-c6b8-4fa2-b527-e9788ef6ae5b" containerName="init" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990850 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="ceilometer-notification-agent" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990867 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" containerName="glance-httpd" Jan 25 08:16:46 crc kubenswrapper[4832]: I0125 08:16:46.990890 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" containerName="proxy-httpd" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.003157 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.003277 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.007755 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.007972 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.015004 4832 scope.go:117] "RemoveContainer" containerID="acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801" Jan 25 08:16:47 crc kubenswrapper[4832]: E0125 08:16:47.023528 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801\": container with ID starting with acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801 not found: ID does not exist" containerID="acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.023587 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801"} err="failed to get container status \"acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801\": rpc error: code = NotFound desc = could not find container \"acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801\": container with ID starting with acbe2cd8067e9a50f978ad1b1a6c5d6ece519325f7bb452b369529624b9c7801 not found: ID does not exist" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.023628 4832 scope.go:117] "RemoveContainer" containerID="dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667" Jan 25 08:16:47 crc kubenswrapper[4832]: E0125 08:16:47.024557 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667\": container with ID starting with dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667 not found: ID does not exist" containerID="dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.024625 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667"} err="failed to get container status \"dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667\": rpc error: code = NotFound desc = could not find container \"dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667\": container with ID starting with dad362216754986eabe4008de7a8656a90cebd02e9d6abe54bde28eba71a3667 not found: ID does not exist" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.024662 4832 scope.go:117] "RemoveContainer" containerID="f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7" Jan 25 08:16:47 crc kubenswrapper[4832]: E0125 08:16:47.025284 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7\": container with ID starting with f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7 not found: ID does not exist" containerID="f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.025326 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7"} err="failed to get container status \"f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7\": rpc error: code = NotFound desc = could not find container \"f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7\": container with ID starting with f68d63b552212b0d184f580f49e465d6ead51b8d0e31c283a3b07b744696dda7 not found: ID does not exist" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.033210 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7429790-03f9-46f3-96d2-5cf0e5323437" (UID: "a7429790-03f9-46f3-96d2-5cf0e5323437"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.046734 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.046778 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.046803 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.046814 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.046826 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8r8b\" (UniqueName: \"kubernetes.io/projected/a7429790-03f9-46f3-96d2-5cf0e5323437-kube-api-access-d8r8b\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.046836 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7429790-03f9-46f3-96d2-5cf0e5323437-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.082817 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.084693 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-config-data" (OuterVolumeSpecName: "config-data") pod "a7429790-03f9-46f3-96d2-5cf0e5323437" (UID: "a7429790-03f9-46f3-96d2-5cf0e5323437"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.148795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-config-data\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.148877 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-scripts\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.148899 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-log-httpd\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.149078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-run-httpd\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.149220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.149278 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.149304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hw7\" (UniqueName: \"kubernetes.io/projected/46d917e3-482a-43d4-9c3a-a632acb41838-kube-api-access-n7hw7\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.149745 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7429790-03f9-46f3-96d2-5cf0e5323437-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.149798 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.251835 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.251954 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.251982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7hw7\" (UniqueName: \"kubernetes.io/projected/46d917e3-482a-43d4-9c3a-a632acb41838-kube-api-access-n7hw7\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.252021 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-config-data\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.252083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-scripts\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.252104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-log-httpd\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.252122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-run-httpd\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.252731 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-log-httpd\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.252756 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-run-httpd\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.257494 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.258768 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.261357 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-config-data\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.268259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-scripts\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 
08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.272174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7hw7\" (UniqueName: \"kubernetes.io/projected/46d917e3-482a-43d4-9c3a-a632acb41838-kube-api-access-n7hw7\") pod \"ceilometer-0\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.327962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.440755 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.685086 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48b257e-ddb7-486d-8788-489ca788ac1f" path="/var/lib/kubelet/pods/b48b257e-ddb7-486d-8788-489ca788ac1f/volumes" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.732987 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-585cc76cc-zg5pq"] Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.733476 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-585cc76cc-zg5pq" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-httpd" containerID="cri-o://08846f1d76951f512607b72d43c94cc03251c22467960102f66d465881deb1f9" gracePeriod=30 Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.734004 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-585cc76cc-zg5pq" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-api" containerID="cri-o://b931c3aab747871a791f5720b4595fc8a711739518f9f979ec95f13285aefd68" gracePeriod=30 Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.783673 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-585cc76cc-zg5pq" 
podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9696/\": EOF" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.789973 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-857c8bdbcf-kwd2q"] Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.792163 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.810693 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-857c8bdbcf-kwd2q"] Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.846255 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7429790-03f9-46f3-96d2-5cf0e5323437","Type":"ContainerDied","Data":"772cc3399cb5a642c32abf2cd2fea48abdd4e5d59e8938f4689361fdcbeca864"} Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.846311 4832 scope.go:117] "RemoveContainer" containerID="f54a9d4c576e11ec459f931e16bc247e33e80eceec82fa91cd62dc1de2c6ddb5" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.846422 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.865318 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpsvm\" (UniqueName: \"kubernetes.io/projected/d1a230b2-45ba-4298-b3d6-2280431c592d-kube-api-access-gpsvm\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.865436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-public-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.865478 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-combined-ca-bundle\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.865501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-config\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.865541 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-ovndb-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: 
\"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.865569 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-internal-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.865595 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-httpd-config\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.885932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20df59e8-9934-47c9-9d8f-a97e0f046368","Type":"ContainerStarted","Data":"c48473d332c2caa4d22a48db50bb44a185daaba009b8f93a65257a4927f90826"} Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.892362 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-855cdf875c-rxk79" event={"ID":"26baac3d-6d07-4f33-956e-4048e3318099","Type":"ContainerStarted","Data":"aef455b78b0b8057d81c7ad7523d653b4c2d8f9a156e332b44183e1e07cc750d"} Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.902113 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" event={"ID":"4899f618-1f51-4d34-9970-7c096359b47e","Type":"ContainerStarted","Data":"405af769c1213df9c7bd0ebd0368bb3b1393d2b296e0a08e27948feea279580b"} Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.904554 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"57235bbb-0d8b-45ea-ad16-e42723ce9047","Type":"ContainerStarted","Data":"bae2373b694c9972e143725fa85e35e68d5fb447a16ecac95865f648c026706c"} Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.904641 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api-log" containerID="cri-o://ba3fd16ebff598e441a9cf6472d940128bd9a9fcc112f1d4c39af51485562467" gracePeriod=30 Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.904699 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.904711 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api" containerID="cri-o://bae2373b694c9972e143725fa85e35e68d5fb447a16ecac95865f648c026706c" gracePeriod=30 Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.926732 4832 scope.go:117] "RemoveContainer" containerID="a59ae8b90264c76e026f186a375ce99f1a0ea5f86a87ea27bf15d9c60ad2587b" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.928169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e","Type":"ContainerStarted","Data":"03354875f2e085d3c53b87c51864e8f11ec119997799a0926ea69932b7de703e"} Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.928341 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-log" containerID="cri-o://a0c9d10629fda3bdc77408cc23ba7e9811800a8d879969e78f7af0bd70a947a1" gracePeriod=30 Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.928455 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-httpd" containerID="cri-o://03354875f2e085d3c53b87c51864e8f11ec119997799a0926ea69932b7de703e" gracePeriod=30 Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.967685 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpsvm\" (UniqueName: \"kubernetes.io/projected/d1a230b2-45ba-4298-b3d6-2280431c592d-kube-api-access-gpsvm\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.968696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-public-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.968744 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-combined-ca-bundle\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.968824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-config\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.968961 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-ovndb-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.969037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-internal-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.969070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-httpd-config\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:47 crc kubenswrapper[4832]: I0125 08:16:47.994089 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-config\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.002415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-ovndb-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.004013 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-combined-ca-bundle\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: 
\"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.006336 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-public-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.032616 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-httpd-config\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.033274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a230b2-45ba-4298-b3d6-2280431c592d-internal-tls-certs\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.039315 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.838970608 podStartE2EDuration="9.039290743s" podCreationTimestamp="2026-01-25 08:16:39 +0000 UTC" firstStartedPulling="2026-01-25 08:16:41.906316584 +0000 UTC m=+1184.580140117" lastFinishedPulling="2026-01-25 08:16:43.106636719 +0000 UTC m=+1185.780460252" observedRunningTime="2026-01-25 08:16:47.927115064 +0000 UTC m=+1190.600938597" watchObservedRunningTime="2026-01-25 08:16:48.039290743 +0000 UTC m=+1190.713114266" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.050560 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpsvm\" (UniqueName: 
\"kubernetes.io/projected/d1a230b2-45ba-4298-b3d6-2280431c592d-kube-api-access-gpsvm\") pod \"neutron-857c8bdbcf-kwd2q\" (UID: \"d1a230b2-45ba-4298-b3d6-2280431c592d\") " pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.114873 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.141134 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.149594 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.167150 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.180035 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.202154 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.205951 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-855cdf875c-rxk79" podStartSLOduration=5.537023542 podStartE2EDuration="10.205926225s" podCreationTimestamp="2026-01-25 08:16:38 +0000 UTC" firstStartedPulling="2026-01-25 08:16:40.967680313 +0000 UTC m=+1183.641503846" lastFinishedPulling="2026-01-25 08:16:45.636582996 +0000 UTC m=+1188.310406529" observedRunningTime="2026-01-25 08:16:48.002883314 +0000 UTC m=+1190.676706857" watchObservedRunningTime="2026-01-25 08:16:48.205926225 +0000 UTC m=+1190.879749758" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.209299 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.226483 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.288466 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.295921 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.29589924 podStartE2EDuration="10.29589924s" podCreationTimestamp="2026-01-25 08:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:48.067416963 +0000 UTC m=+1190.741240506" watchObservedRunningTime="2026-01-25 08:16:48.29589924 +0000 UTC m=+1190.969722773" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.298661 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b4947bb84-pmdh6" 
podStartSLOduration=5.824309429 podStartE2EDuration="10.298649996s" podCreationTimestamp="2026-01-25 08:16:38 +0000 UTC" firstStartedPulling="2026-01-25 08:16:41.159323578 +0000 UTC m=+1183.833147111" lastFinishedPulling="2026-01-25 08:16:45.633664145 +0000 UTC m=+1188.307487678" observedRunningTime="2026-01-25 08:16:48.098140214 +0000 UTC m=+1190.771963747" watchObservedRunningTime="2026-01-25 08:16:48.298649996 +0000 UTC m=+1190.972473539" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.321803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.322303 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.322417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.322525 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-logs\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " 
pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.322543 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.322616 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hksc7\" (UniqueName: \"kubernetes.io/projected/0cdb9042-6480-49eb-b855-ac5c5adce9a4-kube-api-access-hksc7\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.322647 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.322675 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.346961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.346937376 podStartE2EDuration="8.346937376s" podCreationTimestamp="2026-01-25 08:16:40 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:48.155836198 +0000 UTC m=+1190.829659731" watchObservedRunningTime="2026-01-25 08:16:48.346937376 +0000 UTC m=+1191.020760909" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428076 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hksc7\" (UniqueName: \"kubernetes.io/projected/0cdb9042-6480-49eb-b855-ac5c5adce9a4-kube-api-access-hksc7\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428174 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428200 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428526 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428654 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-logs\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.428688 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.429221 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.429528 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.429598 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-logs\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.441681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.444282 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.451552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.455046 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " 
pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.466108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hksc7\" (UniqueName: \"kubernetes.io/projected/0cdb9042-6480-49eb-b855-ac5c5adce9a4-kube-api-access-hksc7\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.495660 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") " pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.529002 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.956010 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerStarted","Data":"48336d09063ad4801f89338c7c7500974726159f5210b3bacea1fac7f0d18594"} Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.974112 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9f466dd54-88fdd"] Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.976080 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.987344 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.987613 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.994553 4832 generic.go:334] "Generic (PLEG): container finished" podID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerID="bae2373b694c9972e143725fa85e35e68d5fb447a16ecac95865f648c026706c" exitCode=0 Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.994602 4832 generic.go:334] "Generic (PLEG): container finished" podID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerID="ba3fd16ebff598e441a9cf6472d940128bd9a9fcc112f1d4c39af51485562467" exitCode=143 Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.994706 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57235bbb-0d8b-45ea-ad16-e42723ce9047","Type":"ContainerDied","Data":"bae2373b694c9972e143725fa85e35e68d5fb447a16ecac95865f648c026706c"} Jan 25 08:16:48 crc kubenswrapper[4832]: I0125 08:16:48.994744 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57235bbb-0d8b-45ea-ad16-e42723ce9047","Type":"ContainerDied","Data":"ba3fd16ebff598e441a9cf6472d940128bd9a9fcc112f1d4c39af51485562467"} Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.027802 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9f466dd54-88fdd"] Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.034496 4832 generic.go:334] "Generic (PLEG): container finished" podID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerID="03354875f2e085d3c53b87c51864e8f11ec119997799a0926ea69932b7de703e" exitCode=0 Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.034539 
4832 generic.go:334] "Generic (PLEG): container finished" podID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerID="a0c9d10629fda3bdc77408cc23ba7e9811800a8d879969e78f7af0bd70a947a1" exitCode=143 Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.034638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e","Type":"ContainerDied","Data":"03354875f2e085d3c53b87c51864e8f11ec119997799a0926ea69932b7de703e"} Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.034676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e","Type":"ContainerDied","Data":"a0c9d10629fda3bdc77408cc23ba7e9811800a8d879969e78f7af0bd70a947a1"} Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.047164 4832 generic.go:334] "Generic (PLEG): container finished" podID="196ac30d-ab85-4327-86df-27e637aba0b3" containerID="08846f1d76951f512607b72d43c94cc03251c22467960102f66d465881deb1f9" exitCode=0 Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.047265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585cc76cc-zg5pq" event={"ID":"196ac30d-ab85-4327-86df-27e637aba0b3","Type":"ContainerDied","Data":"08846f1d76951f512607b72d43c94cc03251c22467960102f66d465881deb1f9"} Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.051499 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-857c8bdbcf-kwd2q"] Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.181078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-logs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.181171 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnv94\" (UniqueName: \"kubernetes.io/projected/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-kube-api-access-pnv94\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.181225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-config-data\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.181248 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-public-tls-certs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.181304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-internal-tls-certs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.181476 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-combined-ca-bundle\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 
08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.181510 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-config-data-custom\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.218094 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.289287 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-config-data-custom\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.289464 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-logs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.289528 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnv94\" (UniqueName: \"kubernetes.io/projected/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-kube-api-access-pnv94\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.289579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-config-data\") pod 
\"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.289608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-public-tls-certs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.289662 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-internal-tls-certs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.291441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-combined-ca-bundle\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.298086 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-logs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.312143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-public-tls-certs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: 
\"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.312768 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-combined-ca-bundle\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.323108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-internal-tls-certs\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.329262 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-config-data\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.366734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-config-data-custom\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.377060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnv94\" (UniqueName: \"kubernetes.io/projected/ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5-kube-api-access-pnv94\") pod \"barbican-api-9f466dd54-88fdd\" (UID: \"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5\") " 
pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.394364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data\") pod \"57235bbb-0d8b-45ea-ad16-e42723ce9047\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.394433 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data-custom\") pod \"57235bbb-0d8b-45ea-ad16-e42723ce9047\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.394460 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-scripts\") pod \"57235bbb-0d8b-45ea-ad16-e42723ce9047\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.394541 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57235bbb-0d8b-45ea-ad16-e42723ce9047-logs\") pod \"57235bbb-0d8b-45ea-ad16-e42723ce9047\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.394612 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59dcg\" (UniqueName: \"kubernetes.io/projected/57235bbb-0d8b-45ea-ad16-e42723ce9047-kube-api-access-59dcg\") pod \"57235bbb-0d8b-45ea-ad16-e42723ce9047\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.394673 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/57235bbb-0d8b-45ea-ad16-e42723ce9047-etc-machine-id\") pod \"57235bbb-0d8b-45ea-ad16-e42723ce9047\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.394741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-combined-ca-bundle\") pod \"57235bbb-0d8b-45ea-ad16-e42723ce9047\" (UID: \"57235bbb-0d8b-45ea-ad16-e42723ce9047\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.398227 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57235bbb-0d8b-45ea-ad16-e42723ce9047-logs" (OuterVolumeSpecName: "logs") pod "57235bbb-0d8b-45ea-ad16-e42723ce9047" (UID: "57235bbb-0d8b-45ea-ad16-e42723ce9047"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.398314 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57235bbb-0d8b-45ea-ad16-e42723ce9047-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "57235bbb-0d8b-45ea-ad16-e42723ce9047" (UID: "57235bbb-0d8b-45ea-ad16-e42723ce9047"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.407933 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-scripts" (OuterVolumeSpecName: "scripts") pod "57235bbb-0d8b-45ea-ad16-e42723ce9047" (UID: "57235bbb-0d8b-45ea-ad16-e42723ce9047"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.428836 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57235bbb-0d8b-45ea-ad16-e42723ce9047-kube-api-access-59dcg" (OuterVolumeSpecName: "kube-api-access-59dcg") pod "57235bbb-0d8b-45ea-ad16-e42723ce9047" (UID: "57235bbb-0d8b-45ea-ad16-e42723ce9047"). InnerVolumeSpecName "kube-api-access-59dcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.429544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57235bbb-0d8b-45ea-ad16-e42723ce9047" (UID: "57235bbb-0d8b-45ea-ad16-e42723ce9047"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.477577 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57235bbb-0d8b-45ea-ad16-e42723ce9047" (UID: "57235bbb-0d8b-45ea-ad16-e42723ce9047"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.498489 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57235bbb-0d8b-45ea-ad16-e42723ce9047-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.498533 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.498544 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.498553 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.498565 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57235bbb-0d8b-45ea-ad16-e42723ce9047-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.498574 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59dcg\" (UniqueName: \"kubernetes.io/projected/57235bbb-0d8b-45ea-ad16-e42723ce9047-kube-api-access-59dcg\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.584065 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data" (OuterVolumeSpecName: "config-data") pod "57235bbb-0d8b-45ea-ad16-e42723ce9047" (UID: "57235bbb-0d8b-45ea-ad16-e42723ce9047"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.601077 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57235bbb-0d8b-45ea-ad16-e42723ce9047-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.670157 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.702246 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7429790-03f9-46f3-96d2-5cf0e5323437" path="/var/lib/kubelet/pods/a7429790-03f9-46f3-96d2-5cf0e5323437/volumes" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.718565 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.805953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-httpd-run\") pod \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.806025 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-logs\") pod \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.806188 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " Jan 25 08:16:49 crc 
kubenswrapper[4832]: I0125 08:16:49.806225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-config-data\") pod \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.806287 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-scripts\") pod \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.806315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-combined-ca-bundle\") pod \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.806361 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tskr5\" (UniqueName: \"kubernetes.io/projected/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-kube-api-access-tskr5\") pod \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\" (UID: \"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e\") " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.808250 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-logs" (OuterVolumeSpecName: "logs") pod "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" (UID: "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.808626 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" (UID: "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.826645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-scripts" (OuterVolumeSpecName: "scripts") pod "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" (UID: "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.829407 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" (UID: "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.847515 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.855745 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-kube-api-access-tskr5" (OuterVolumeSpecName: "kube-api-access-tskr5") pod "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" (UID: "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e"). InnerVolumeSpecName "kube-api-access-tskr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.885546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" (UID: "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.909547 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.910851 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.910886 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.910897 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.912614 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.922807 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tskr5\" (UniqueName: 
\"kubernetes.io/projected/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-kube-api-access-tskr5\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:49 crc kubenswrapper[4832]: I0125 08:16:49.939547 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.006950 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-config-data" (OuterVolumeSpecName: "config-data") pod "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" (UID: "cf6bae18-db06-4abf-a6b1-aa1eda2cc70e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.048944 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.048985 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.069095 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-585cc76cc-zg5pq" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9696/\": dial tcp 10.217.0.150:9696: connect: connection refused" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.079616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857c8bdbcf-kwd2q" event={"ID":"d1a230b2-45ba-4298-b3d6-2280431c592d","Type":"ContainerStarted","Data":"250a07b533bccf4ec453521575ebcddc5eb7d8d6850e7b5358bd76d199f8829e"} Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 
08:16:50.079666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857c8bdbcf-kwd2q" event={"ID":"d1a230b2-45ba-4298-b3d6-2280431c592d","Type":"ContainerStarted","Data":"895bead0bfaee4729c8baab955645849b1bd7dc9808647b05f04d50858ecbdbb"} Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.098517 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerStarted","Data":"2fa7c62cf872eec62993feebb547efd1d836fc78c352d62ecb389fc5263fa964"} Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.105612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cdb9042-6480-49eb-b855-ac5c5adce9a4","Type":"ContainerStarted","Data":"07aff72612b79ee6dc51ed271f971665125e4d4fee2a158cbca36df69b45ceb6"} Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.126450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57235bbb-0d8b-45ea-ad16-e42723ce9047","Type":"ContainerDied","Data":"88d93d8922f40a654683282cb6c67b5d9a2abcfb865d9c7d1af96a3a9b19ec48"} Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.127477 4832 scope.go:117] "RemoveContainer" containerID="bae2373b694c9972e143725fa85e35e68d5fb447a16ecac95865f648c026706c" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.127647 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.142573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf6bae18-db06-4abf-a6b1-aa1eda2cc70e","Type":"ContainerDied","Data":"765105d11f9a9e0a6de4d477348f16fde3891fcd1fcb4f70c8fa137cac7aed7e"} Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.143164 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.189270 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.200499 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.210269 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: E0125 08:16:50.210790 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api-log" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.210812 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api-log" Jan 25 08:16:50 crc kubenswrapper[4832]: E0125 08:16:50.210834 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-log" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.210841 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-log" Jan 25 08:16:50 crc kubenswrapper[4832]: E0125 08:16:50.210865 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.210871 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api" Jan 25 08:16:50 crc kubenswrapper[4832]: E0125 08:16:50.210880 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-httpd" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.210887 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-httpd" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.211072 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-httpd" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.211086 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.211103 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" containerName="glance-log" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.211112 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" containerName="cinder-api-log" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.212280 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.221941 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.221956 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.222543 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.310994 4832 scope.go:117] "RemoveContainer" containerID="ba3fd16ebff598e441a9cf6472d940128bd9a9fcc112f1d4c39af51485562467" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.338498 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.352102 4832 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db0ff763-c24c-45a4-b3c5-7dc32962816f-logs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-config-data-custom\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356528 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356579 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-scripts\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356636 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-config-data\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356679 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0ff763-c24c-45a4-b3c5-7dc32962816f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.356847 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg6ds\" (UniqueName: \"kubernetes.io/projected/db0ff763-c24c-45a4-b3c5-7dc32962816f-kube-api-access-mg6ds\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.418697 4832 scope.go:117] "RemoveContainer" containerID="03354875f2e085d3c53b87c51864e8f11ec119997799a0926ea69932b7de703e" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461077 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-scripts\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461180 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461209 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-config-data\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461269 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461300 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0ff763-c24c-45a4-b3c5-7dc32962816f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg6ds\" (UniqueName: \"kubernetes.io/projected/db0ff763-c24c-45a4-b3c5-7dc32962816f-kube-api-access-mg6ds\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461345 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db0ff763-c24c-45a4-b3c5-7dc32962816f-logs\") pod \"cinder-api-0\" (UID: 
\"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461422 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-config-data-custom\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.461473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.462164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0ff763-c24c-45a4-b3c5-7dc32962816f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.462766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db0ff763-c24c-45a4-b3c5-7dc32962816f-logs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.462875 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.493356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 
25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.496561 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.500544 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.511450 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.514926 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.520334 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.521977 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.522246 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.522344 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-scripts\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.522645 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-config-data\") pod \"cinder-api-0\" (UID: 
\"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.522949 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg6ds\" (UniqueName: \"kubernetes.io/projected/db0ff763-c24c-45a4-b3c5-7dc32962816f-kube-api-access-mg6ds\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.523047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-config-data-custom\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.526422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0ff763-c24c-45a4-b3c5-7dc32962816f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db0ff763-c24c-45a4-b3c5-7dc32962816f\") " pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.532052 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9f466dd54-88fdd"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.575558 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667354 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667488 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667516 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667597 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667663 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.667702 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgjp\" (UniqueName: \"kubernetes.io/projected/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-kube-api-access-2kgjp\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.691892 4832 scope.go:117] "RemoveContainer" containerID="a0c9d10629fda3bdc77408cc23ba7e9811800a8d879969e78f7af0bd70a947a1" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771358 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771452 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771477 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgjp\" (UniqueName: 
\"kubernetes.io/projected/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-kube-api-access-2kgjp\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.771679 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.776328 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.776645 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.782601 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.783861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.793295 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.794190 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.821594 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgjp\" (UniqueName: \"kubernetes.io/projected/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-kube-api-access-2kgjp\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.821750 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.824423 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.920945 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.921503 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-wsdhh"] Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.925957 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" podUID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerName="dnsmasq-dns" containerID="cri-o://05dfd328d0d18ead32420ac258f446591195a0fcebedc0337ea4bf2187fd90f3" gracePeriod=10 Jan 25 08:16:50 crc kubenswrapper[4832]: I0125 08:16:50.956552 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.061835 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.191460 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f466dd54-88fdd" event={"ID":"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5","Type":"ContainerStarted","Data":"5e9e62f77f82489af58c33459ede12189078d7aef6289a8f55a9bca7be7a6473"} Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.212728 4832 generic.go:334] "Generic (PLEG): container finished" podID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerID="05dfd328d0d18ead32420ac258f446591195a0fcebedc0337ea4bf2187fd90f3" exitCode=0 Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.212877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" 
event={"ID":"aba728c5-d77a-4d46-a3e8-2e0d1e31756a","Type":"ContainerDied","Data":"05dfd328d0d18ead32420ac258f446591195a0fcebedc0337ea4bf2187fd90f3"} Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.275514 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857c8bdbcf-kwd2q" event={"ID":"d1a230b2-45ba-4298-b3d6-2280431c592d","Type":"ContainerStarted","Data":"f29b0d14ff918d0ea201c3645520a6daa4de8f3fce543c3dfdd6d97d3f369e42"} Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.275624 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.295283 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerStarted","Data":"8a1e0a575361cee9d184afb1b8fcd954be2b6f9bf2db1a5dc12174982f51b06c"} Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.305348 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-857c8bdbcf-kwd2q" podStartSLOduration=4.305321165 podStartE2EDuration="4.305321165s" podCreationTimestamp="2026-01-25 08:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:51.298989667 +0000 UTC m=+1193.972813190" watchObservedRunningTime="2026-01-25 08:16:51.305321165 +0000 UTC m=+1193.979144698" Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.526476 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.547487 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.708665 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57235bbb-0d8b-45ea-ad16-e42723ce9047" 
path="/var/lib/kubelet/pods/57235bbb-0d8b-45ea-ad16-e42723ce9047/volumes" Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.710078 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6bae18-db06-4abf-a6b1-aa1eda2cc70e" path="/var/lib/kubelet/pods/cf6bae18-db06-4abf-a6b1-aa1eda2cc70e/volumes" Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.838848 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.984956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-config\") pod \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.985439 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-nb\") pod \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.985554 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-sb\") pod \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.985619 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-swift-storage-0\") pod \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.985788 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-svc\") pod \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " Jan 25 08:16:51 crc kubenswrapper[4832]: I0125 08:16:51.985928 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hn7q\" (UniqueName: \"kubernetes.io/projected/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-kube-api-access-2hn7q\") pod \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\" (UID: \"aba728c5-d77a-4d46-a3e8-2e0d1e31756a\") " Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.010284 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-kube-api-access-2hn7q" (OuterVolumeSpecName: "kube-api-access-2hn7q") pod "aba728c5-d77a-4d46-a3e8-2e0d1e31756a" (UID: "aba728c5-d77a-4d46-a3e8-2e0d1e31756a"). InnerVolumeSpecName "kube-api-access-2hn7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.090812 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hn7q\" (UniqueName: \"kubernetes.io/projected/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-kube-api-access-2hn7q\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.153809 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.290174 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aba728c5-d77a-4d46-a3e8-2e0d1e31756a" (UID: "aba728c5-d77a-4d46-a3e8-2e0d1e31756a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.297789 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.346369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cdb9042-6480-49eb-b855-ac5c5adce9a4","Type":"ContainerStarted","Data":"b7dde5f52c9ae54ed382849789f84bc94b5a67160df613844bf537e0b149ec00"} Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.348847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db0ff763-c24c-45a4-b3c5-7dc32962816f","Type":"ContainerStarted","Data":"4efd7c4847d3841be95189daabec212d35da97d14331988867d82992954fadea"} Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.384106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f466dd54-88fdd" event={"ID":"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5","Type":"ContainerStarted","Data":"f0381a6e984feb989b083536efb6b088b314256330251278180a4b7c3a9f81fa"} Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.388637 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.388678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65965d6475-wsdhh" event={"ID":"aba728c5-d77a-4d46-a3e8-2e0d1e31756a","Type":"ContainerDied","Data":"0cd5a4cfbdefaf225008f59f21b0fb893920316413297ec6544d38ee4fcb350a"} Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.388744 4832 scope.go:117] "RemoveContainer" containerID="05dfd328d0d18ead32420ac258f446591195a0fcebedc0337ea4bf2187fd90f3" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.399169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6","Type":"ContainerStarted","Data":"15735f60fbb6f2381e175f8be4edec672ef72977049e2445d0afc6c741ef1afb"} Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.401850 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="cinder-scheduler" containerID="cri-o://09ca5f5ac2308a34d67b7f3713bdec702e3804405ce910494e50503e064a9dba" gracePeriod=30 Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.402319 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="probe" containerID="cri-o://c48473d332c2caa4d22a48db50bb44a185daaba009b8f93a65257a4927f90826" gracePeriod=30 Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.412590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aba728c5-d77a-4d46-a3e8-2e0d1e31756a" (UID: "aba728c5-d77a-4d46-a3e8-2e0d1e31756a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.426050 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aba728c5-d77a-4d46-a3e8-2e0d1e31756a" (UID: "aba728c5-d77a-4d46-a3e8-2e0d1e31756a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.436195 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aba728c5-d77a-4d46-a3e8-2e0d1e31756a" (UID: "aba728c5-d77a-4d46-a3e8-2e0d1e31756a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.437461 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-config" (OuterVolumeSpecName: "config") pod "aba728c5-d77a-4d46-a3e8-2e0d1e31756a" (UID: "aba728c5-d77a-4d46-a3e8-2e0d1e31756a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.505417 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.505930 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.505943 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.505965 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba728c5-d77a-4d46-a3e8-2e0d1e31756a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.550960 4832 scope.go:117] "RemoveContainer" containerID="6b530cc1cf0e1578b59b872971bed0b5dcd8232ba169e2fd47e6516092de68a5" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.736785 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-wsdhh"] Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.743812 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:52 crc kubenswrapper[4832]: I0125 08:16:52.772547 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65965d6475-wsdhh"] Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.465857 4832 generic.go:334] "Generic (PLEG): container finished" podID="196ac30d-ab85-4327-86df-27e637aba0b3" 
containerID="b931c3aab747871a791f5720b4595fc8a711739518f9f979ec95f13285aefd68" exitCode=0 Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.466561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585cc76cc-zg5pq" event={"ID":"196ac30d-ab85-4327-86df-27e637aba0b3","Type":"ContainerDied","Data":"b931c3aab747871a791f5720b4595fc8a711739518f9f979ec95f13285aefd68"} Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.497322 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerStarted","Data":"a9b27d98bc7d6099b4201a731c803c0c1fc266e275ac91fa6f12df69f03df64a"} Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.499499 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cdb9042-6480-49eb-b855-ac5c5adce9a4","Type":"ContainerStarted","Data":"55dc6f35742eb6720b81f0c8beb836f9ae06b558c0a1ee8804acc7d548342188"} Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.502290 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db0ff763-c24c-45a4-b3c5-7dc32962816f","Type":"ContainerStarted","Data":"be97fbba3b09586e8ef77c1ad05959463f73f5e06464fcac66653f8b88f34da0"} Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.531429 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9f466dd54-88fdd" event={"ID":"ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5","Type":"ContainerStarted","Data":"1c54eb0d19c6a27ab7c6934d53410f46979adaee572c93254c3d155e427d59a3"} Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.532638 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.532670 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:16:53 crc 
kubenswrapper[4832]: I0125 08:16:53.554190 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.554165149 podStartE2EDuration="6.554165149s" podCreationTimestamp="2026-01-25 08:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:53.53310187 +0000 UTC m=+1196.206925403" watchObservedRunningTime="2026-01-25 08:16:53.554165149 +0000 UTC m=+1196.227988682" Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.599408 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9f466dd54-88fdd" podStartSLOduration=5.599375563 podStartE2EDuration="5.599375563s" podCreationTimestamp="2026-01-25 08:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:53.575170036 +0000 UTC m=+1196.248993569" watchObservedRunningTime="2026-01-25 08:16:53.599375563 +0000 UTC m=+1196.273199086" Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.709582 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" path="/var/lib/kubelet/pods/aba728c5-d77a-4d46-a3e8-2e0d1e31756a/volumes" Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.750675 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d6d8975cd-v8jf8" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 08:16:53 crc kubenswrapper[4832]: I0125 08:16:53.903068 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.060050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s69lz\" (UniqueName: \"kubernetes.io/projected/196ac30d-ab85-4327-86df-27e637aba0b3-kube-api-access-s69lz\") pod \"196ac30d-ab85-4327-86df-27e637aba0b3\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.060499 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-ovndb-tls-certs\") pod \"196ac30d-ab85-4327-86df-27e637aba0b3\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.060773 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-combined-ca-bundle\") pod \"196ac30d-ab85-4327-86df-27e637aba0b3\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.060985 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-public-tls-certs\") pod \"196ac30d-ab85-4327-86df-27e637aba0b3\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.061055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-config\") pod \"196ac30d-ab85-4327-86df-27e637aba0b3\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.061153 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-internal-tls-certs\") pod \"196ac30d-ab85-4327-86df-27e637aba0b3\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.061188 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-httpd-config\") pod \"196ac30d-ab85-4327-86df-27e637aba0b3\" (UID: \"196ac30d-ab85-4327-86df-27e637aba0b3\") " Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.071290 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196ac30d-ab85-4327-86df-27e637aba0b3-kube-api-access-s69lz" (OuterVolumeSpecName: "kube-api-access-s69lz") pod "196ac30d-ab85-4327-86df-27e637aba0b3" (UID: "196ac30d-ab85-4327-86df-27e637aba0b3"). InnerVolumeSpecName "kube-api-access-s69lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.086574 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "196ac30d-ab85-4327-86df-27e637aba0b3" (UID: "196ac30d-ab85-4327-86df-27e637aba0b3"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.163657 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.163697 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s69lz\" (UniqueName: \"kubernetes.io/projected/196ac30d-ab85-4327-86df-27e637aba0b3-kube-api-access-s69lz\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.203502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "196ac30d-ab85-4327-86df-27e637aba0b3" (UID: "196ac30d-ab85-4327-86df-27e637aba0b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.207614 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "196ac30d-ab85-4327-86df-27e637aba0b3" (UID: "196ac30d-ab85-4327-86df-27e637aba0b3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.240792 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-config" (OuterVolumeSpecName: "config") pod "196ac30d-ab85-4327-86df-27e637aba0b3" (UID: "196ac30d-ab85-4327-86df-27e637aba0b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.245162 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "196ac30d-ab85-4327-86df-27e637aba0b3" (UID: "196ac30d-ab85-4327-86df-27e637aba0b3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.245646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "196ac30d-ab85-4327-86df-27e637aba0b3" (UID: "196ac30d-ab85-4327-86df-27e637aba0b3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.265602 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.265632 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.265641 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.265650 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 
08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.265658 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ac30d-ab85-4327-86df-27e637aba0b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.343999 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.664825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585cc76cc-zg5pq" event={"ID":"196ac30d-ab85-4327-86df-27e637aba0b3","Type":"ContainerDied","Data":"df919a29518d05908d94c9b3701eae5787d62340b7101945762ad8e03234c567"} Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.665370 4832 scope.go:117] "RemoveContainer" containerID="08846f1d76951f512607b72d43c94cc03251c22467960102f66d465881deb1f9" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.665542 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-585cc76cc-zg5pq" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.723987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6","Type":"ContainerStarted","Data":"9c76a612cd6731411225aa1754ce7dbee2923523b6cba2bce2299702d69fa5c0"} Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.724062 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6","Type":"ContainerStarted","Data":"70dd41b47f030be98780515dc5751d968023e82ef169c81d380463ea5150cd5f"} Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.762753 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d6d8975cd-v8jf8" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.771612 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.77158995 podStartE2EDuration="4.77158995s" podCreationTimestamp="2026-01-25 08:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:54.754832646 +0000 UTC m=+1197.428656179" watchObservedRunningTime="2026-01-25 08:16:54.77158995 +0000 UTC m=+1197.445413483" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.805003 4832 generic.go:334] "Generic (PLEG): container finished" podID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerID="c48473d332c2caa4d22a48db50bb44a185daaba009b8f93a65257a4927f90826" exitCode=0 Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.805364 4832 
generic.go:334] "Generic (PLEG): container finished" podID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerID="09ca5f5ac2308a34d67b7f3713bdec702e3804405ce910494e50503e064a9dba" exitCode=0 Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.809577 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20df59e8-9934-47c9-9d8f-a97e0f046368","Type":"ContainerDied","Data":"c48473d332c2caa4d22a48db50bb44a185daaba009b8f93a65257a4927f90826"} Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.809631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20df59e8-9934-47c9-9d8f-a97e0f046368","Type":"ContainerDied","Data":"09ca5f5ac2308a34d67b7f3713bdec702e3804405ce910494e50503e064a9dba"} Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.867787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db0ff763-c24c-45a4-b3c5-7dc32962816f","Type":"ContainerStarted","Data":"a98b3641d5913adc8eb3499cd77a4aa514ebb7a547b00375c304de14001788d7"} Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.868437 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.882260 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-585cc76cc-zg5pq"] Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.931753 4832 scope.go:117] "RemoveContainer" containerID="b931c3aab747871a791f5720b4595fc8a711739518f9f979ec95f13285aefd68" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.936629 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f649cfc6-vzpx7" podUID="26fd6803-3263-4989-a86e-908f6a504d14" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.936801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.938107 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"10ffcffcab8dac65ab76aaa66f717c929c0bbdef0bea9e339bf47c7390fd8147"} pod="openstack/horizon-f649cfc6-vzpx7" containerMessage="Container horizon failed startup probe, will be restarted" Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.938168 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f649cfc6-vzpx7" podUID="26fd6803-3263-4989-a86e-908f6a504d14" containerName="horizon" containerID="cri-o://10ffcffcab8dac65ab76aaa66f717c929c0bbdef0bea9e339bf47c7390fd8147" gracePeriod=30 Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.938821 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-585cc76cc-zg5pq"] Jan 25 08:16:54 crc kubenswrapper[4832]: I0125 08:16:54.941979 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.941943598 podStartE2EDuration="4.941943598s" podCreationTimestamp="2026-01-25 08:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:54.919901209 +0000 UTC m=+1197.593724752" watchObservedRunningTime="2026-01-25 08:16:54.941943598 +0000 UTC m=+1197.615767131" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.089279 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.132071 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20df59e8-9934-47c9-9d8f-a97e0f046368-etc-machine-id\") pod \"20df59e8-9934-47c9-9d8f-a97e0f046368\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.132359 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data-custom\") pod \"20df59e8-9934-47c9-9d8f-a97e0f046368\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.132395 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data\") pod \"20df59e8-9934-47c9-9d8f-a97e0f046368\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.132420 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22mls\" (UniqueName: \"kubernetes.io/projected/20df59e8-9934-47c9-9d8f-a97e0f046368-kube-api-access-22mls\") pod \"20df59e8-9934-47c9-9d8f-a97e0f046368\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.132439 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-combined-ca-bundle\") pod \"20df59e8-9934-47c9-9d8f-a97e0f046368\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.132528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-scripts\") pod \"20df59e8-9934-47c9-9d8f-a97e0f046368\" (UID: \"20df59e8-9934-47c9-9d8f-a97e0f046368\") " Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.134588 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20df59e8-9934-47c9-9d8f-a97e0f046368-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "20df59e8-9934-47c9-9d8f-a97e0f046368" (UID: "20df59e8-9934-47c9-9d8f-a97e0f046368"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.150837 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20df59e8-9934-47c9-9d8f-a97e0f046368-kube-api-access-22mls" (OuterVolumeSpecName: "kube-api-access-22mls") pod "20df59e8-9934-47c9-9d8f-a97e0f046368" (UID: "20df59e8-9934-47c9-9d8f-a97e0f046368"). InnerVolumeSpecName "kube-api-access-22mls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.152357 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20df59e8-9934-47c9-9d8f-a97e0f046368" (UID: "20df59e8-9934-47c9-9d8f-a97e0f046368"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.175023 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-scripts" (OuterVolumeSpecName: "scripts") pod "20df59e8-9934-47c9-9d8f-a97e0f046368" (UID: "20df59e8-9934-47c9-9d8f-a97e0f046368"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.234812 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22mls\" (UniqueName: \"kubernetes.io/projected/20df59e8-9934-47c9-9d8f-a97e0f046368-kube-api-access-22mls\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.234858 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.234868 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.234877 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20df59e8-9934-47c9-9d8f-a97e0f046368-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.252921 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20df59e8-9934-47c9-9d8f-a97e0f046368" (UID: "20df59e8-9934-47c9-9d8f-a97e0f046368"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.318558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data" (OuterVolumeSpecName: "config-data") pod "20df59e8-9934-47c9-9d8f-a97e0f046368" (UID: "20df59e8-9934-47c9-9d8f-a97e0f046368"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.336267 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.336302 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df59e8-9934-47c9-9d8f-a97e0f046368-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.681938 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" path="/var/lib/kubelet/pods/196ac30d-ab85-4327-86df-27e637aba0b3/volumes" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.882445 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerStarted","Data":"c6c28cbc3f6313ddab4255c962fb40272c22f8923540363aa61e422db6eb1418"} Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.882642 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.884876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20df59e8-9934-47c9-9d8f-a97e0f046368","Type":"ContainerDied","Data":"50105461bcbe121d53114c0cc573823f4f72024263a595b9363688fc3b8d5881"} Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.884934 4832 scope.go:117] "RemoveContainer" containerID="c48473d332c2caa4d22a48db50bb44a185daaba009b8f93a65257a4927f90826" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.884942 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.909713 4832 scope.go:117] "RemoveContainer" containerID="09ca5f5ac2308a34d67b7f3713bdec702e3804405ce910494e50503e064a9dba" Jan 25 08:16:55 crc kubenswrapper[4832]: I0125 08:16:55.919917 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.365944101 podStartE2EDuration="9.919889419s" podCreationTimestamp="2026-01-25 08:16:46 +0000 UTC" firstStartedPulling="2026-01-25 08:16:48.032573953 +0000 UTC m=+1190.706397486" lastFinishedPulling="2026-01-25 08:16:54.586519271 +0000 UTC m=+1197.260342804" observedRunningTime="2026-01-25 08:16:55.913564581 +0000 UTC m=+1198.587388114" watchObservedRunningTime="2026-01-25 08:16:55.919889419 +0000 UTC m=+1198.593712952" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:55.996637 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.029609 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.052192 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:56 crc kubenswrapper[4832]: E0125 08:16:56.052694 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerName="dnsmasq-dns" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.052720 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerName="dnsmasq-dns" Jan 25 08:16:56 crc kubenswrapper[4832]: E0125 08:16:56.052737 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-api" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.052747 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-api" Jan 25 08:16:56 crc kubenswrapper[4832]: E0125 08:16:56.052768 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="cinder-scheduler" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.052777 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="cinder-scheduler" Jan 25 08:16:56 crc kubenswrapper[4832]: E0125 08:16:56.052794 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="probe" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.052803 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="probe" Jan 25 08:16:56 crc kubenswrapper[4832]: E0125 08:16:56.052819 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-httpd" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.052828 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-httpd" Jan 25 08:16:56 crc kubenswrapper[4832]: E0125 08:16:56.052847 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerName="init" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.052856 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerName="init" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.053068 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-httpd" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.053093 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="cinder-scheduler" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.053114 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba728c5-d77a-4d46-a3e8-2e0d1e31756a" containerName="dnsmasq-dns" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.053129 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" containerName="probe" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.053158 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="196ac30d-ab85-4327-86df-27e637aba0b3" containerName="neutron-api" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.071534 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.080960 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.084825 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.183875 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.183951 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.183992 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhv8\" (UniqueName: \"kubernetes.io/projected/c3f65dba-194a-46be-b020-24ee852b965a-kube-api-access-6qhv8\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.184042 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3f65dba-194a-46be-b020-24ee852b965a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.184102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.184165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.285668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qhv8\" (UniqueName: \"kubernetes.io/projected/c3f65dba-194a-46be-b020-24ee852b965a-kube-api-access-6qhv8\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.285777 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3f65dba-194a-46be-b020-24ee852b965a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.285865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.285944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.285980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.286021 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.286505 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3f65dba-194a-46be-b020-24ee852b965a-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.295202 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.295852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.310283 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.323904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qhv8\" (UniqueName: \"kubernetes.io/projected/c3f65dba-194a-46be-b020-24ee852b965a-kube-api-access-6qhv8\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.329653 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f65dba-194a-46be-b020-24ee852b965a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c3f65dba-194a-46be-b020-24ee852b965a\") " pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.414727 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 25 08:16:56 crc kubenswrapper[4832]: I0125 08:16:56.898078 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 25 08:16:56 crc kubenswrapper[4832]: W0125 08:16:56.901550 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f65dba_194a_46be_b020_24ee852b965a.slice/crio-3d606f32b94b131cdc2e394e1b336f3744659fea97c9a9f0affeaf3863826fe5 WatchSource:0}: Error finding container 3d606f32b94b131cdc2e394e1b336f3744659fea97c9a9f0affeaf3863826fe5: Status 404 returned error can't find the container with id 3d606f32b94b131cdc2e394e1b336f3744659fea97c9a9f0affeaf3863826fe5 Jan 25 08:16:57 crc kubenswrapper[4832]: I0125 08:16:57.063337 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:16:57 crc kubenswrapper[4832]: I0125 08:16:57.327286 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:16:57 crc kubenswrapper[4832]: I0125 08:16:57.702493 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20df59e8-9934-47c9-9d8f-a97e0f046368" path="/var/lib/kubelet/pods/20df59e8-9934-47c9-9d8f-a97e0f046368/volumes" Jan 25 08:16:57 crc kubenswrapper[4832]: I0125 08:16:57.919703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c3f65dba-194a-46be-b020-24ee852b965a","Type":"ContainerStarted","Data":"d3662918a59b720ae80d364667fe4b15e18a55cff2ccb56191d67e1f8724d5cd"} Jan 25 08:16:57 crc kubenswrapper[4832]: I0125 08:16:57.920016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c3f65dba-194a-46be-b020-24ee852b965a","Type":"ContainerStarted","Data":"3d606f32b94b131cdc2e394e1b336f3744659fea97c9a9f0affeaf3863826fe5"} Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.529946 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.531037 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.567888 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.587903 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.952308 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c3f65dba-194a-46be-b020-24ee852b965a","Type":"ContainerStarted","Data":"25cfafa0e14b1a2d6e29ac0b6cc70e2aa08a597fbc83625561498b14f75b4797"} Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.952359 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.953004 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 25 08:16:58 crc kubenswrapper[4832]: I0125 08:16:58.983050 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.983034274 podStartE2EDuration="3.983034274s" podCreationTimestamp="2026-01-25 08:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:16:58.981355582 
+0000 UTC m=+1201.655179115" watchObservedRunningTime="2026-01-25 08:16:58.983034274 +0000 UTC m=+1201.656857797" Jan 25 08:16:59 crc kubenswrapper[4832]: I0125 08:16:59.167500 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-699f4599dd-j695n" Jan 25 08:16:59 crc kubenswrapper[4832]: I0125 08:16:59.962339 4832 generic.go:334] "Generic (PLEG): container finished" podID="26fd6803-3263-4989-a86e-908f6a504d14" containerID="10ffcffcab8dac65ab76aaa66f717c929c0bbdef0bea9e339bf47c7390fd8147" exitCode=0 Jan 25 08:16:59 crc kubenswrapper[4832]: I0125 08:16:59.962457 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f649cfc6-vzpx7" event={"ID":"26fd6803-3263-4989-a86e-908f6a504d14","Type":"ContainerDied","Data":"10ffcffcab8dac65ab76aaa66f717c929c0bbdef0bea9e339bf47c7390fd8147"} Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.748505 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.750470 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.752877 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.753027 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.753521 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4lzgs" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.814990 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.903501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a962ff03-629f-458b-b5dc-3980f55d9f66-openstack-config\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.903588 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a962ff03-629f-458b-b5dc-3980f55d9f66-openstack-config-secret\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.903672 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a962ff03-629f-458b-b5dc-3980f55d9f66-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.903718 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvpjb\" (UniqueName: \"kubernetes.io/projected/a962ff03-629f-458b-b5dc-3980f55d9f66-kube-api-access-qvpjb\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.958539 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:00 crc kubenswrapper[4832]: I0125 08:17:00.958598 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.001744 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.001777 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.003013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f649cfc6-vzpx7" event={"ID":"26fd6803-3263-4989-a86e-908f6a504d14","Type":"ContainerStarted","Data":"4ecfca87326a18659a1f1d508180388d9ae799bd0e41aac900a93e548c06fb83"} Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.004632 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.004935 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.005172 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvpjb\" (UniqueName: \"kubernetes.io/projected/a962ff03-629f-458b-b5dc-3980f55d9f66-kube-api-access-qvpjb\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " 
pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.005247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a962ff03-629f-458b-b5dc-3980f55d9f66-openstack-config\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.005293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a962ff03-629f-458b-b5dc-3980f55d9f66-openstack-config-secret\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.005367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a962ff03-629f-458b-b5dc-3980f55d9f66-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.007299 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a962ff03-629f-458b-b5dc-3980f55d9f66-openstack-config\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.029772 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a962ff03-629f-458b-b5dc-3980f55d9f66-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.031837 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a962ff03-629f-458b-b5dc-3980f55d9f66-openstack-config-secret\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.032416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.047704 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvpjb\" (UniqueName: \"kubernetes.io/projected/a962ff03-629f-458b-b5dc-3980f55d9f66-kube-api-access-qvpjb\") pod \"openstackclient\" (UID: \"a962ff03-629f-458b-b5dc-3980f55d9f66\") " pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.067534 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.414992 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 25 08:17:01 crc kubenswrapper[4832]: I0125 08:17:01.683089 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 25 08:17:02 crc kubenswrapper[4832]: I0125 08:17:02.013056 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a962ff03-629f-458b-b5dc-3980f55d9f66","Type":"ContainerStarted","Data":"00084a97fff9ba862bd1817876d40098e482376a6b69d5841493011eb8932712"} Jan 25 08:17:02 crc kubenswrapper[4832]: I0125 08:17:02.014955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:02 crc kubenswrapper[4832]: I0125 08:17:02.455525 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:17:02 crc kubenswrapper[4832]: I0125 
08:17:02.835134 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9f466dd54-88fdd" Jan 25 08:17:02 crc kubenswrapper[4832]: I0125 08:17:02.908189 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d6d8975cd-v8jf8"] Jan 25 08:17:02 crc kubenswrapper[4832]: I0125 08:17:02.908436 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d6d8975cd-v8jf8" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api-log" containerID="cri-o://ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968" gracePeriod=30 Jan 25 08:17:02 crc kubenswrapper[4832]: I0125 08:17:02.909793 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d6d8975cd-v8jf8" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api" containerID="cri-o://ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9" gracePeriod=30 Jan 25 08:17:03 crc kubenswrapper[4832]: I0125 08:17:03.130650 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:03 crc kubenswrapper[4832]: I0125 08:17:03.654272 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 25 08:17:03 crc kubenswrapper[4832]: I0125 08:17:03.654528 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:04 crc kubenswrapper[4832]: I0125 08:17:04.145037 4832 generic.go:334] "Generic (PLEG): container finished" podID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerID="ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968" exitCode=143 Jan 25 08:17:04 crc kubenswrapper[4832]: I0125 08:17:04.145143 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:04 crc kubenswrapper[4832]: I0125 08:17:04.145153 4832 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Jan 25 08:17:04 crc kubenswrapper[4832]: I0125 08:17:04.146344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d6d8975cd-v8jf8" event={"ID":"31271ce3-bbf8-4033-b2ba-5e47f4e9a151","Type":"ContainerDied","Data":"ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968"} Jan 25 08:17:04 crc kubenswrapper[4832]: I0125 08:17:04.484950 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 25 08:17:05 crc kubenswrapper[4832]: I0125 08:17:05.440968 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 25 08:17:06 crc kubenswrapper[4832]: I0125 08:17:06.054876 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:06 crc kubenswrapper[4832]: I0125 08:17:06.055319 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:06 crc kubenswrapper[4832]: I0125 08:17:06.484538 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d6d8975cd-v8jf8" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:39690->10.217.0.157:9311: read: connection reset by peer" Jan 25 08:17:06 crc kubenswrapper[4832]: I0125 08:17:06.484639 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d6d8975cd-v8jf8" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:39680->10.217.0.157:9311: read: connection reset by peer" Jan 25 08:17:06 crc kubenswrapper[4832]: I0125 08:17:06.751158 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 25 08:17:06 crc kubenswrapper[4832]: I0125 08:17:06.995639 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.197133 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.245485 4832 generic.go:334] "Generic (PLEG): container finished" podID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerID="ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9" exitCode=0 Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.245761 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d6d8975cd-v8jf8" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.245781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d6d8975cd-v8jf8" event={"ID":"31271ce3-bbf8-4033-b2ba-5e47f4e9a151","Type":"ContainerDied","Data":"ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9"} Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.247013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d6d8975cd-v8jf8" event={"ID":"31271ce3-bbf8-4033-b2ba-5e47f4e9a151","Type":"ContainerDied","Data":"f335e6e3ca8d120dfbad0813fc9a9b858a9dd23b810eed789b4c3dba1d083056"} Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.247035 4832 scope.go:117] "RemoveContainer" containerID="ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.277309 4832 scope.go:117] "RemoveContainer" containerID="ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.297970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data-custom\") pod \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.298283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-logs\") pod \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.298509 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data\") pod \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.298633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-combined-ca-bundle\") pod \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.298756 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjjjs\" (UniqueName: \"kubernetes.io/projected/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-kube-api-access-bjjjs\") pod \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\" (UID: \"31271ce3-bbf8-4033-b2ba-5e47f4e9a151\") " Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.307202 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-logs" (OuterVolumeSpecName: "logs") pod "31271ce3-bbf8-4033-b2ba-5e47f4e9a151" (UID: "31271ce3-bbf8-4033-b2ba-5e47f4e9a151"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.324716 4832 scope.go:117] "RemoveContainer" containerID="ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.324765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "31271ce3-bbf8-4033-b2ba-5e47f4e9a151" (UID: "31271ce3-bbf8-4033-b2ba-5e47f4e9a151"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:07 crc kubenswrapper[4832]: E0125 08:17:07.325979 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9\": container with ID starting with ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9 not found: ID does not exist" containerID="ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.326037 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9"} err="failed to get container status \"ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9\": rpc error: code = NotFound desc = could not find container \"ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9\": container with ID starting with ac30079689906c935c5df69c10e6f58d72656176e6acd96a6f8750d0d5df0de9 not found: ID does not exist" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.326076 4832 scope.go:117] "RemoveContainer" containerID="ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968" Jan 25 08:17:07 crc kubenswrapper[4832]: E0125 08:17:07.326877 4832 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968\": container with ID starting with ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968 not found: ID does not exist" containerID="ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.326906 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968"} err="failed to get container status \"ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968\": rpc error: code = NotFound desc = could not find container \"ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968\": container with ID starting with ba53edfcba5fb3514f58bd4974ce0ce60f36709ad87b64874274b12e9e753968 not found: ID does not exist" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.327351 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-kube-api-access-bjjjs" (OuterVolumeSpecName: "kube-api-access-bjjjs") pod "31271ce3-bbf8-4033-b2ba-5e47f4e9a151" (UID: "31271ce3-bbf8-4033-b2ba-5e47f4e9a151"). InnerVolumeSpecName "kube-api-access-bjjjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.358522 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31271ce3-bbf8-4033-b2ba-5e47f4e9a151" (UID: "31271ce3-bbf8-4033-b2ba-5e47f4e9a151"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.402614 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.402653 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjjjs\" (UniqueName: \"kubernetes.io/projected/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-kube-api-access-bjjjs\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.402662 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.402671 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.419563 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data" (OuterVolumeSpecName: "config-data") pod "31271ce3-bbf8-4033-b2ba-5e47f4e9a151" (UID: "31271ce3-bbf8-4033-b2ba-5e47f4e9a151"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.504685 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31271ce3-bbf8-4033-b2ba-5e47f4e9a151-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.604125 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d6d8975cd-v8jf8"] Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.615451 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6d6d8975cd-v8jf8"] Jan 25 08:17:07 crc kubenswrapper[4832]: I0125 08:17:07.723783 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" path="/var/lib/kubelet/pods/31271ce3-bbf8-4033-b2ba-5e47f4e9a151/volumes" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.392678 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-658c5f7995-t6v6k"] Jan 25 08:17:09 crc kubenswrapper[4832]: E0125 08:17:09.393143 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api-log" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.393159 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api-log" Jan 25 08:17:09 crc kubenswrapper[4832]: E0125 08:17:09.393186 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.393194 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.393402 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api-log" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.393426 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="31271ce3-bbf8-4033-b2ba-5e47f4e9a151" containerName="barbican-api" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.394852 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.403098 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.403354 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.404099 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.407565 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-658c5f7995-t6v6k"] Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.447735 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf9h8\" (UniqueName: \"kubernetes.io/projected/81bd3301-f264-4150-8f71-869af2c1ed3d-kube-api-access-xf9h8\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.447793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81bd3301-f264-4150-8f71-869af2c1ed3d-etc-swift\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 
08:17:09.448055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-config-data\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.448145 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-combined-ca-bundle\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.448193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-public-tls-certs\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.448349 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81bd3301-f264-4150-8f71-869af2c1ed3d-run-httpd\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.448489 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-internal-tls-certs\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 
08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.448522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81bd3301-f264-4150-8f71-869af2c1ed3d-log-httpd\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.549992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-internal-tls-certs\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.550041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81bd3301-f264-4150-8f71-869af2c1ed3d-log-httpd\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.550078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf9h8\" (UniqueName: \"kubernetes.io/projected/81bd3301-f264-4150-8f71-869af2c1ed3d-kube-api-access-xf9h8\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.550097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81bd3301-f264-4150-8f71-869af2c1ed3d-etc-swift\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 
08:17:09.550164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-config-data\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.550198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-combined-ca-bundle\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.550220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-public-tls-certs\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.550297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81bd3301-f264-4150-8f71-869af2c1ed3d-run-httpd\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.551016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81bd3301-f264-4150-8f71-869af2c1ed3d-log-httpd\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.551181 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81bd3301-f264-4150-8f71-869af2c1ed3d-run-httpd\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.558243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-public-tls-certs\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.558265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-combined-ca-bundle\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.558353 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-internal-tls-certs\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.560539 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bd3301-f264-4150-8f71-869af2c1ed3d-config-data\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.561666 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/81bd3301-f264-4150-8f71-869af2c1ed3d-etc-swift\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.571282 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf9h8\" (UniqueName: \"kubernetes.io/projected/81bd3301-f264-4150-8f71-869af2c1ed3d-kube-api-access-xf9h8\") pod \"swift-proxy-658c5f7995-t6v6k\" (UID: \"81bd3301-f264-4150-8f71-869af2c1ed3d\") " pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.714270 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.916677 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:17:09 crc kubenswrapper[4832]: I0125 08:17:09.917590 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:17:10 crc kubenswrapper[4832]: I0125 08:17:10.527025 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-658c5f7995-t6v6k"] Jan 25 08:17:10 crc kubenswrapper[4832]: I0125 08:17:10.848465 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:10 crc kubenswrapper[4832]: I0125 08:17:10.848751 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-central-agent" containerID="cri-o://2fa7c62cf872eec62993feebb547efd1d836fc78c352d62ecb389fc5263fa964" gracePeriod=30 Jan 25 08:17:10 crc kubenswrapper[4832]: I0125 08:17:10.848881 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="proxy-httpd" containerID="cri-o://c6c28cbc3f6313ddab4255c962fb40272c22f8923540363aa61e422db6eb1418" gracePeriod=30 Jan 25 08:17:10 crc kubenswrapper[4832]: I0125 08:17:10.849023 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-notification-agent" containerID="cri-o://8a1e0a575361cee9d184afb1b8fcd954be2b6f9bf2db1a5dc12174982f51b06c" gracePeriod=30 Jan 25 08:17:10 crc kubenswrapper[4832]: I0125 08:17:10.849035 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="sg-core" containerID="cri-o://a9b27d98bc7d6099b4201a731c803c0c1fc266e275ac91fa6f12df69f03df64a" gracePeriod=30 Jan 25 08:17:10 crc kubenswrapper[4832]: I0125 08:17:10.862840 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF" Jan 25 08:17:11 crc kubenswrapper[4832]: I0125 08:17:11.344201 4832 generic.go:334] "Generic (PLEG): container finished" podID="46d917e3-482a-43d4-9c3a-a632acb41838" containerID="c6c28cbc3f6313ddab4255c962fb40272c22f8923540363aa61e422db6eb1418" exitCode=0 Jan 25 08:17:11 crc kubenswrapper[4832]: I0125 08:17:11.344634 4832 generic.go:334] "Generic (PLEG): container finished" podID="46d917e3-482a-43d4-9c3a-a632acb41838" containerID="a9b27d98bc7d6099b4201a731c803c0c1fc266e275ac91fa6f12df69f03df64a" exitCode=2 Jan 25 08:17:11 crc kubenswrapper[4832]: I0125 08:17:11.344463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerDied","Data":"c6c28cbc3f6313ddab4255c962fb40272c22f8923540363aa61e422db6eb1418"} Jan 25 08:17:11 crc 
kubenswrapper[4832]: I0125 08:17:11.344679 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerDied","Data":"a9b27d98bc7d6099b4201a731c803c0c1fc266e275ac91fa6f12df69f03df64a"} Jan 25 08:17:12 crc kubenswrapper[4832]: I0125 08:17:12.360522 4832 generic.go:334] "Generic (PLEG): container finished" podID="46d917e3-482a-43d4-9c3a-a632acb41838" containerID="2fa7c62cf872eec62993feebb547efd1d836fc78c352d62ecb389fc5263fa964" exitCode=0 Jan 25 08:17:12 crc kubenswrapper[4832]: I0125 08:17:12.360600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerDied","Data":"2fa7c62cf872eec62993feebb547efd1d836fc78c352d62ecb389fc5263fa964"} Jan 25 08:17:14 crc kubenswrapper[4832]: I0125 08:17:14.386877 4832 generic.go:334] "Generic (PLEG): container finished" podID="46d917e3-482a-43d4-9c3a-a632acb41838" containerID="8a1e0a575361cee9d184afb1b8fcd954be2b6f9bf2db1a5dc12174982f51b06c" exitCode=0 Jan 25 08:17:14 crc kubenswrapper[4832]: I0125 08:17:14.386979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerDied","Data":"8a1e0a575361cee9d184afb1b8fcd954be2b6f9bf2db1a5dc12174982f51b06c"} Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.162973 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.259957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7hw7\" (UniqueName: \"kubernetes.io/projected/46d917e3-482a-43d4-9c3a-a632acb41838-kube-api-access-n7hw7\") pod \"46d917e3-482a-43d4-9c3a-a632acb41838\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.260172 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-config-data\") pod \"46d917e3-482a-43d4-9c3a-a632acb41838\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.260220 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-combined-ca-bundle\") pod \"46d917e3-482a-43d4-9c3a-a632acb41838\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.260275 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-run-httpd\") pod \"46d917e3-482a-43d4-9c3a-a632acb41838\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.260311 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-log-httpd\") pod \"46d917e3-482a-43d4-9c3a-a632acb41838\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.260342 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-scripts\") pod \"46d917e3-482a-43d4-9c3a-a632acb41838\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.260366 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-sg-core-conf-yaml\") pod \"46d917e3-482a-43d4-9c3a-a632acb41838\" (UID: \"46d917e3-482a-43d4-9c3a-a632acb41838\") " Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.266044 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46d917e3-482a-43d4-9c3a-a632acb41838" (UID: "46d917e3-482a-43d4-9c3a-a632acb41838"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.266261 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46d917e3-482a-43d4-9c3a-a632acb41838" (UID: "46d917e3-482a-43d4-9c3a-a632acb41838"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.292658 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d917e3-482a-43d4-9c3a-a632acb41838-kube-api-access-n7hw7" (OuterVolumeSpecName: "kube-api-access-n7hw7") pod "46d917e3-482a-43d4-9c3a-a632acb41838" (UID: "46d917e3-482a-43d4-9c3a-a632acb41838"). InnerVolumeSpecName "kube-api-access-n7hw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.301937 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-scripts" (OuterVolumeSpecName: "scripts") pod "46d917e3-482a-43d4-9c3a-a632acb41838" (UID: "46d917e3-482a-43d4-9c3a-a632acb41838"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.376797 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.376838 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46d917e3-482a-43d4-9c3a-a632acb41838-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.376848 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.376859 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7hw7\" (UniqueName: \"kubernetes.io/projected/46d917e3-482a-43d4-9c3a-a632acb41838-kube-api-access-n7hw7\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.406643 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46d917e3-482a-43d4-9c3a-a632acb41838" (UID: "46d917e3-482a-43d4-9c3a-a632acb41838"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.415775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-658c5f7995-t6v6k" event={"ID":"81bd3301-f264-4150-8f71-869af2c1ed3d","Type":"ContainerStarted","Data":"e04270428a783ca8f5621b4ffc49e431b044f9f539cc4281932a4efcb275c7ee"} Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.418378 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46d917e3-482a-43d4-9c3a-a632acb41838","Type":"ContainerDied","Data":"48336d09063ad4801f89338c7c7500974726159f5210b3bacea1fac7f0d18594"} Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.418450 4832 scope.go:117] "RemoveContainer" containerID="c6c28cbc3f6313ddab4255c962fb40272c22f8923540363aa61e422db6eb1418" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.418598 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.432427 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a962ff03-629f-458b-b5dc-3980f55d9f66","Type":"ContainerStarted","Data":"eb191fb775e7b39ac567c3a832ef67214ff6d4f3a3ee24dbf449f77706d308f5"} Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.450119 4832 scope.go:117] "RemoveContainer" containerID="a9b27d98bc7d6099b4201a731c803c0c1fc266e275ac91fa6f12df69f03df64a" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.454524 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46d917e3-482a-43d4-9c3a-a632acb41838" (UID: "46d917e3-482a-43d4-9c3a-a632acb41838"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.457694 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.209038814 podStartE2EDuration="17.457673483s" podCreationTimestamp="2026-01-25 08:17:00 +0000 UTC" firstStartedPulling="2026-01-25 08:17:01.675997 +0000 UTC m=+1204.349820533" lastFinishedPulling="2026-01-25 08:17:16.924631659 +0000 UTC m=+1219.598455202" observedRunningTime="2026-01-25 08:17:17.45025502 +0000 UTC m=+1220.124078543" watchObservedRunningTime="2026-01-25 08:17:17.457673483 +0000 UTC m=+1220.131497016" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.472764 4832 scope.go:117] "RemoveContainer" containerID="8a1e0a575361cee9d184afb1b8fcd954be2b6f9bf2db1a5dc12174982f51b06c" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.473892 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-config-data" (OuterVolumeSpecName: "config-data") pod "46d917e3-482a-43d4-9c3a-a632acb41838" (UID: "46d917e3-482a-43d4-9c3a-a632acb41838"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.481594 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.481625 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.481637 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46d917e3-482a-43d4-9c3a-a632acb41838-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.514937 4832 scope.go:117] "RemoveContainer" containerID="2fa7c62cf872eec62993feebb547efd1d836fc78c352d62ecb389fc5263fa964" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.742044 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.751106 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.789665 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:17 crc kubenswrapper[4832]: E0125 08:17:17.790079 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="sg-core" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790093 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="sg-core" Jan 25 08:17:17 crc kubenswrapper[4832]: E0125 08:17:17.790127 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="proxy-httpd" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790134 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="proxy-httpd" Jan 25 08:17:17 crc kubenswrapper[4832]: E0125 08:17:17.790147 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-notification-agent" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790153 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-notification-agent" Jan 25 08:17:17 crc kubenswrapper[4832]: E0125 08:17:17.790172 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-central-agent" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790179 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-central-agent" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790338 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="sg-core" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790355 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="proxy-httpd" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790369 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-notification-agent" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.790397 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" containerName="ceilometer-central-agent" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.792050 4832 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.796413 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.800166 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.805228 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.891044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psfk\" (UniqueName: \"kubernetes.io/projected/65a902e4-15aa-499b-aa8e-a5ed097f9918-kube-api-access-7psfk\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.891216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.891252 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-log-httpd\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.891317 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.891708 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-scripts\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.891943 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-config-data\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.892096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-run-httpd\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.993807 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-run-httpd\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.993920 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psfk\" (UniqueName: \"kubernetes.io/projected/65a902e4-15aa-499b-aa8e-a5ed097f9918-kube-api-access-7psfk\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.993969 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.994005 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-log-httpd\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.994043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.994108 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-scripts\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.994164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-config-data\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.994587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-run-httpd\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " 
pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.994903 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-log-httpd\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.998880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-scripts\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:17 crc kubenswrapper[4832]: I0125 08:17:17.999867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.000076 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.000315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-config-data\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.025177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7psfk\" (UniqueName: 
\"kubernetes.io/projected/65a902e4-15aa-499b-aa8e-a5ed097f9918-kube-api-access-7psfk\") pod \"ceilometer-0\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " pod="openstack/ceilometer-0" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.135095 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.187760 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-857c8bdbcf-kwd2q" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.343755 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc694898-lnc2f"] Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.343998 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc694898-lnc2f" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-api" containerID="cri-o://6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a" gracePeriod=30 Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.344462 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc694898-lnc2f" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-httpd" containerID="cri-o://cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26" gracePeriod=30 Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.546641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-658c5f7995-t6v6k" event={"ID":"81bd3301-f264-4150-8f71-869af2c1ed3d","Type":"ContainerStarted","Data":"ec80b502ffa3d014fb926517c6327ea20bdff8d6dfa21c9b513fd099aab0866e"} Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.546710 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-658c5f7995-t6v6k" 
event={"ID":"81bd3301-f264-4150-8f71-869af2c1ed3d","Type":"ContainerStarted","Data":"34bdbaa7095fd73d2c1c05093fa8a724f361fb07a63e9a1db423fad1b2978923"} Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.548058 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.548105 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-658c5f7995-t6v6k" Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.589099 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:18 crc kubenswrapper[4832]: I0125 08:17:18.595306 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-658c5f7995-t6v6k" podStartSLOduration=9.595277027 podStartE2EDuration="9.595277027s" podCreationTimestamp="2026-01-25 08:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:17:18.576158899 +0000 UTC m=+1221.249982442" watchObservedRunningTime="2026-01-25 08:17:18.595277027 +0000 UTC m=+1221.269100560" Jan 25 08:17:18 crc kubenswrapper[4832]: W0125 08:17:18.620597 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a902e4_15aa_499b_aa8e_a5ed097f9918.slice/crio-971eaaebd328a54cb3148204d3cb86fe51822f75c5d0c97fa3f15a36eda03b96 WatchSource:0}: Error finding container 971eaaebd328a54cb3148204d3cb86fe51822f75c5d0c97fa3f15a36eda03b96: Status 404 returned error can't find the container with id 971eaaebd328a54cb3148204d3cb86fe51822f75c5d0c97fa3f15a36eda03b96 Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.573130 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerStarted","Data":"ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89"} Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.573591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerStarted","Data":"971eaaebd328a54cb3148204d3cb86fe51822f75c5d0c97fa3f15a36eda03b96"} Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.575811 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerID="cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26" exitCode=0 Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.575895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc694898-lnc2f" event={"ID":"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e","Type":"ContainerDied","Data":"cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26"} Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.680786 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d917e3-482a-43d4-9c3a-a632acb41838" path="/var/lib/kubelet/pods/46d917e3-482a-43d4-9c3a-a632acb41838/volumes" Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.897146 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.897972 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-log" containerID="cri-o://b7dde5f52c9ae54ed382849789f84bc94b5a67160df613844bf537e0b149ec00" gracePeriod=30 Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.898067 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-httpd" containerID="cri-o://55dc6f35742eb6720b81f0c8beb836f9ae06b558c0a1ee8804acc7d548342188" gracePeriod=30 Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.904264 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.165:9292/healthcheck\": EOF" Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.907352 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-external-api-0" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.165:9292/healthcheck\": EOF" Jan 25 08:17:19 crc kubenswrapper[4832]: I0125 08:17:19.917943 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f649cfc6-vzpx7" podUID="26fd6803-3263-4989-a86e-908f6a504d14" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 25 08:17:20 crc kubenswrapper[4832]: I0125 08:17:20.594590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerStarted","Data":"a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a"} Jan 25 08:17:20 crc kubenswrapper[4832]: I0125 08:17:20.599038 4832 generic.go:334] "Generic (PLEG): container finished" podID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerID="b7dde5f52c9ae54ed382849789f84bc94b5a67160df613844bf537e0b149ec00" exitCode=143 Jan 25 08:17:20 crc kubenswrapper[4832]: I0125 08:17:20.599110 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"0cdb9042-6480-49eb-b855-ac5c5adce9a4","Type":"ContainerDied","Data":"b7dde5f52c9ae54ed382849789f84bc94b5a67160df613844bf537e0b149ec00"} Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.379550 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc694898-lnc2f" Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.484885 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-config\") pod \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.484954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-ovndb-tls-certs\") pod \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.485124 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljht2\" (UniqueName: \"kubernetes.io/projected/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-kube-api-access-ljht2\") pod \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.485173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-combined-ca-bundle\") pod \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.485201 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-httpd-config\") 
pod \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\" (UID: \"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e\") " Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.491960 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-kube-api-access-ljht2" (OuterVolumeSpecName: "kube-api-access-ljht2") pod "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" (UID: "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e"). InnerVolumeSpecName "kube-api-access-ljht2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.496098 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" (UID: "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.587962 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" (UID: "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.588290 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljht2\" (UniqueName: \"kubernetes.io/projected/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-kube-api-access-ljht2\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.588334 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.588346 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.631240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerStarted","Data":"a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e"} Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.636672 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerID="6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a" exitCode=0 Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.636745 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc694898-lnc2f" event={"ID":"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e","Type":"ContainerDied","Data":"6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a"} Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.637168 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc694898-lnc2f" 
event={"ID":"1fdbaf45-d8d7-430d-9c6d-29359e4dd17e","Type":"ContainerDied","Data":"213d462405947634370f71339ea2118b5da6e85e044eaa057a761b433eab668a"}
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.637199 4832 scope.go:117] "RemoveContainer" containerID="cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26"
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.637336 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc694898-lnc2f"
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.641192 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" (UID: "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.642642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-config" (OuterVolumeSpecName: "config") pod "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" (UID: "1fdbaf45-d8d7-430d-9c6d-29359e4dd17e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.668967 4832 scope.go:117] "RemoveContainer" containerID="6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a"
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.700107 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-config\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.700160 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.708260 4832 scope.go:117] "RemoveContainer" containerID="cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26"
Jan 25 08:17:21 crc kubenswrapper[4832]: E0125 08:17:21.709333 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26\": container with ID starting with cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26 not found: ID does not exist" containerID="cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26"
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.709377 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26"} err="failed to get container status \"cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26\": rpc error: code = NotFound desc = could not find container \"cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26\": container with ID starting with cfabfac4215c85cb04318d6e8a65d5fc42bf16d1a77ecce2faa828a5db7e7e26 not found: ID does not exist"
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.709433 4832 scope.go:117] "RemoveContainer" containerID="6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a"
Jan 25 08:17:21 crc kubenswrapper[4832]: E0125 08:17:21.710223 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a\": container with ID starting with 6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a not found: ID does not exist" containerID="6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a"
Jan 25 08:17:21 crc kubenswrapper[4832]: I0125 08:17:21.710308 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a"} err="failed to get container status \"6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a\": rpc error: code = NotFound desc = could not find container \"6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a\": container with ID starting with 6b4d8ad30e05cde88c2a993b10597ed6b155ae433b122fd292612d31b3d8090a not found: ID does not exist"
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.016094 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc694898-lnc2f"]
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.024992 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dc694898-lnc2f"]
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.648535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerStarted","Data":"b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9"}
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.648993 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.675336 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.027242379 podStartE2EDuration="5.675310321s" podCreationTimestamp="2026-01-25 08:17:17 +0000 UTC" firstStartedPulling="2026-01-25 08:17:18.625670138 +0000 UTC m=+1221.299493711" lastFinishedPulling="2026-01-25 08:17:22.27373812 +0000 UTC m=+1224.947561653" observedRunningTime="2026-01-25 08:17:22.673815104 +0000 UTC m=+1225.347638627" watchObservedRunningTime="2026-01-25 08:17:22.675310321 +0000 UTC m=+1225.349133844"
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.980962 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.981336 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-log" containerID="cri-o://70dd41b47f030be98780515dc5751d968023e82ef169c81d380463ea5150cd5f" gracePeriod=30
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.981417 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-httpd" containerID="cri-o://9c76a612cd6731411225aa1754ce7dbee2923523b6cba2bce2299702d69fa5c0" gracePeriod=30
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.992777 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": EOF"
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.992792 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": EOF"
Jan 25 08:17:22 crc kubenswrapper[4832]: I0125 08:17:22.992792 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": EOF"
Jan 25 08:17:23 crc kubenswrapper[4832]: I0125 08:17:23.662409 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerID="70dd41b47f030be98780515dc5751d968023e82ef169c81d380463ea5150cd5f" exitCode=143
Jan 25 08:17:23 crc kubenswrapper[4832]: I0125 08:17:23.662511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6","Type":"ContainerDied","Data":"70dd41b47f030be98780515dc5751d968023e82ef169c81d380463ea5150cd5f"}
Jan 25 08:17:23 crc kubenswrapper[4832]: I0125 08:17:23.682339 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" path="/var/lib/kubelet/pods/1fdbaf45-d8d7-430d-9c6d-29359e4dd17e/volumes"
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.008294 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.691559 4832 generic.go:334] "Generic (PLEG): container finished" podID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerID="55dc6f35742eb6720b81f0c8beb836f9ae06b558c0a1ee8804acc7d548342188" exitCode=0
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.692211 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-central-agent" containerID="cri-o://ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89" gracePeriod=30
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.692316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cdb9042-6480-49eb-b855-ac5c5adce9a4","Type":"ContainerDied","Data":"55dc6f35742eb6720b81f0c8beb836f9ae06b558c0a1ee8804acc7d548342188"}
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.692700 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="proxy-httpd" containerID="cri-o://b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9" gracePeriod=30
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.692909 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="sg-core" containerID="cri-o://a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e" gracePeriod=30
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.692925 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-notification-agent" containerID="cri-o://a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a" gracePeriod=30
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.755526 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-658c5f7995-t6v6k"
Jan 25 08:17:24 crc kubenswrapper[4832]: I0125 08:17:24.755591 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-658c5f7995-t6v6k"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.172960 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278013 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hksc7\" (UniqueName: \"kubernetes.io/projected/0cdb9042-6480-49eb-b855-ac5c5adce9a4-kube-api-access-hksc7\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278113 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-scripts\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278166 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-logs\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278277 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-httpd-run\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-config-data\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278428 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278472 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-combined-ca-bundle\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278493 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-public-tls-certs\") pod \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\" (UID: \"0cdb9042-6480-49eb-b855-ac5c5adce9a4\") "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-logs" (OuterVolumeSpecName: "logs") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.278945 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-logs\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.279229 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.285993 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cdb9042-6480-49eb-b855-ac5c5adce9a4-kube-api-access-hksc7" (OuterVolumeSpecName: "kube-api-access-hksc7") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "kube-api-access-hksc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.286239 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.298561 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-scripts" (OuterVolumeSpecName: "scripts") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.339298 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.354710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-config-data" (OuterVolumeSpecName: "config-data") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.354821 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0cdb9042-6480-49eb-b855-ac5c5adce9a4" (UID: "0cdb9042-6480-49eb-b855-ac5c5adce9a4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.380658 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-scripts\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.380703 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0cdb9042-6480-49eb-b855-ac5c5adce9a4-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.380714 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-config-data\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.380756 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.380777 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.380792 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cdb9042-6480-49eb-b855-ac5c5adce9a4-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.380816 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hksc7\" (UniqueName: \"kubernetes.io/projected/0cdb9042-6480-49eb-b855-ac5c5adce9a4-kube-api-access-hksc7\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.408709 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.483673 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.706535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0cdb9042-6480-49eb-b855-ac5c5adce9a4","Type":"ContainerDied","Data":"07aff72612b79ee6dc51ed271f971665125e4d4fee2a158cbca36df69b45ceb6"}
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.706595 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.706621 4832 scope.go:117] "RemoveContainer" containerID="55dc6f35742eb6720b81f0c8beb836f9ae06b558c0a1ee8804acc7d548342188"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.715889 4832 generic.go:334] "Generic (PLEG): container finished" podID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerID="b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9" exitCode=0
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.715932 4832 generic.go:334] "Generic (PLEG): container finished" podID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerID="a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e" exitCode=2
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.715942 4832 generic.go:334] "Generic (PLEG): container finished" podID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerID="a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a" exitCode=0
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.715968 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerDied","Data":"b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9"}
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.716005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerDied","Data":"a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e"}
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.716016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerDied","Data":"a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a"}
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.743193 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.743663 4832 scope.go:117] "RemoveContainer" containerID="b7dde5f52c9ae54ed382849789f84bc94b5a67160df613844bf537e0b149ec00"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.753796 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.779326 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 25 08:17:25 crc kubenswrapper[4832]: E0125 08:17:25.779862 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-log"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.779920 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-log"
Jan 25 08:17:25 crc kubenswrapper[4832]: E0125 08:17:25.779993 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-httpd"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.780000 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-httpd"
Jan 25 08:17:25 crc kubenswrapper[4832]: E0125 08:17:25.780011 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-api"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.780017 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-api"
Jan 25 08:17:25 crc kubenswrapper[4832]: E0125 08:17:25.780059 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-httpd"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.780068 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-httpd"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.780299 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-api"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.780313 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fdbaf45-d8d7-430d-9c6d-29359e4dd17e" containerName="neutron-httpd"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.780325 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-httpd"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.780336 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" containerName="glance-log"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.793257 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.800500 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.801753 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.834978 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.897976 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.898043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.898073 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvv7p\" (UniqueName: \"kubernetes.io/projected/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-kube-api-access-vvv7p\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.898109 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.898130 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-logs\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.898149 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.898176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:25 crc kubenswrapper[4832]: I0125 08:17:25.898209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.000291 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.000361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.000415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvv7p\" (UniqueName: \"kubernetes.io/projected/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-kube-api-access-vvv7p\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.000460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.000490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-logs\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.000526 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.000566 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.001274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-logs\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.001289 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.001696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.007483 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.007581 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.011259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.011466 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.024375 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvv7p\" (UniqueName: \"kubernetes.io/projected/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-kube-api-access-vvv7p\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.027118 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba1988f-0ee4-4e4d-9b32-eff3fe30c959-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.059840 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959\") " pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.124421 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.363356 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mckms"]
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.369837 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mckms"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.385743 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mckms"]
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.435693 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qfsv4"]
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.437224 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qfsv4"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.458028 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qfsv4"]
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.518065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3981045c-8650-4fda-af05-1ff4196d30de-operator-scripts\") pod \"nova-api-db-create-mckms\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " pod="openstack/nova-api-db-create-mckms"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.518156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7n4\" (UniqueName: \"kubernetes.io/projected/ede7170a-cec3-43e5-b7de-d37e72f0cc11-kube-api-access-ww7n4\") pod \"nova-cell0-db-create-qfsv4\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " pod="openstack/nova-cell0-db-create-qfsv4"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.518295 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-786z7\" (UniqueName: \"kubernetes.io/projected/3981045c-8650-4fda-af05-1ff4196d30de-kube-api-access-786z7\") pod \"nova-api-db-create-mckms\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " pod="openstack/nova-api-db-create-mckms"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.518321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede7170a-cec3-43e5-b7de-d37e72f0cc11-operator-scripts\") pod \"nova-cell0-db-create-qfsv4\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " pod="openstack/nova-cell0-db-create-qfsv4"
Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.545952 4832 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openstack/nova-api-fdf0-account-create-update-xcnhj"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.547084 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.549916 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.555142 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q8swj"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.556476 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.563721 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fdf0-account-create-update-xcnhj"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.575925 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q8swj"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.633167 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-operator-scripts\") pod \"nova-cell1-db-create-q8swj\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.633247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-operator-scripts\") pod \"nova-api-fdf0-account-create-update-xcnhj\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 
08:17:26.633417 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-786z7\" (UniqueName: \"kubernetes.io/projected/3981045c-8650-4fda-af05-1ff4196d30de-kube-api-access-786z7\") pod \"nova-api-db-create-mckms\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " pod="openstack/nova-api-db-create-mckms" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.633744 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede7170a-cec3-43e5-b7de-d37e72f0cc11-operator-scripts\") pod \"nova-cell0-db-create-qfsv4\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " pod="openstack/nova-cell0-db-create-qfsv4" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.634275 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftwrx\" (UniqueName: \"kubernetes.io/projected/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-kube-api-access-ftwrx\") pod \"nova-cell1-db-create-q8swj\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.634447 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3981045c-8650-4fda-af05-1ff4196d30de-operator-scripts\") pod \"nova-api-db-create-mckms\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " pod="openstack/nova-api-db-create-mckms" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.634515 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjnb2\" (UniqueName: \"kubernetes.io/projected/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-kube-api-access-rjnb2\") pod \"nova-api-fdf0-account-create-update-xcnhj\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 
08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.634642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7n4\" (UniqueName: \"kubernetes.io/projected/ede7170a-cec3-43e5-b7de-d37e72f0cc11-kube-api-access-ww7n4\") pod \"nova-cell0-db-create-qfsv4\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " pod="openstack/nova-cell0-db-create-qfsv4" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.634774 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede7170a-cec3-43e5-b7de-d37e72f0cc11-operator-scripts\") pod \"nova-cell0-db-create-qfsv4\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " pod="openstack/nova-cell0-db-create-qfsv4" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.635321 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3981045c-8650-4fda-af05-1ff4196d30de-operator-scripts\") pod \"nova-api-db-create-mckms\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " pod="openstack/nova-api-db-create-mckms" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.654674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-786z7\" (UniqueName: \"kubernetes.io/projected/3981045c-8650-4fda-af05-1ff4196d30de-kube-api-access-786z7\") pod \"nova-api-db-create-mckms\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " pod="openstack/nova-api-db-create-mckms" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.656288 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7n4\" (UniqueName: \"kubernetes.io/projected/ede7170a-cec3-43e5-b7de-d37e72f0cc11-kube-api-access-ww7n4\") pod \"nova-cell0-db-create-qfsv4\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " pod="openstack/nova-cell0-db-create-qfsv4" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.691870 4832 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mckms" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.736308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-operator-scripts\") pod \"nova-cell1-db-create-q8swj\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.736377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-operator-scripts\") pod \"nova-api-fdf0-account-create-update-xcnhj\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.736523 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftwrx\" (UniqueName: \"kubernetes.io/projected/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-kube-api-access-ftwrx\") pod \"nova-cell1-db-create-q8swj\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.736580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjnb2\" (UniqueName: \"kubernetes.io/projected/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-kube-api-access-rjnb2\") pod \"nova-api-fdf0-account-create-update-xcnhj\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.738969 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-operator-scripts\") pod \"nova-cell1-db-create-q8swj\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.739679 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-operator-scripts\") pod \"nova-api-fdf0-account-create-update-xcnhj\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.746549 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-734e-account-create-update-h4xzg"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.747975 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.755678 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-734e-account-create-update-h4xzg"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.756753 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.771331 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftwrx\" (UniqueName: \"kubernetes.io/projected/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-kube-api-access-ftwrx\") pod \"nova-cell1-db-create-q8swj\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.771366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjnb2\" (UniqueName: \"kubernetes.io/projected/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-kube-api-access-rjnb2\") pod 
\"nova-api-fdf0-account-create-update-xcnhj\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.774769 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qfsv4" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.837946 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebae8e-c265-49f1-a050-d6ae6b1ea729-operator-scripts\") pod \"nova-cell0-734e-account-create-update-h4xzg\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.838025 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jks\" (UniqueName: \"kubernetes.io/projected/48ebae8e-c265-49f1-a050-d6ae6b1ea729-kube-api-access-55jks\") pod \"nova-cell0-734e-account-create-update-h4xzg\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.868451 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.883135 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.908015 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.939226 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebae8e-c265-49f1-a050-d6ae6b1ea729-operator-scripts\") pod \"nova-cell0-734e-account-create-update-h4xzg\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.939311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55jks\" (UniqueName: \"kubernetes.io/projected/48ebae8e-c265-49f1-a050-d6ae6b1ea729-kube-api-access-55jks\") pod \"nova-cell0-734e-account-create-update-h4xzg\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.940532 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebae8e-c265-49f1-a050-d6ae6b1ea729-operator-scripts\") pod \"nova-cell0-734e-account-create-update-h4xzg\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.963073 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-30c4-account-create-update-7tq6t"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.964353 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.973820 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-30c4-account-create-update-7tq6t"] Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.978147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55jks\" (UniqueName: \"kubernetes.io/projected/48ebae8e-c265-49f1-a050-d6ae6b1ea729-kube-api-access-55jks\") pod \"nova-cell0-734e-account-create-update-h4xzg\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:26 crc kubenswrapper[4832]: I0125 08:17:26.979673 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.041217 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7txz\" (UniqueName: \"kubernetes.io/projected/163febb0-9715-4944-8c59-0a4997e12c47-kube-api-access-r7txz\") pod \"nova-cell1-30c4-account-create-update-7tq6t\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.041323 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/163febb0-9715-4944-8c59-0a4997e12c47-operator-scripts\") pod \"nova-cell1-30c4-account-create-update-7tq6t\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.136579 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.144091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7txz\" (UniqueName: \"kubernetes.io/projected/163febb0-9715-4944-8c59-0a4997e12c47-kube-api-access-r7txz\") pod \"nova-cell1-30c4-account-create-update-7tq6t\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.144163 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/163febb0-9715-4944-8c59-0a4997e12c47-operator-scripts\") pod \"nova-cell1-30c4-account-create-update-7tq6t\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.145056 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/163febb0-9715-4944-8c59-0a4997e12c47-operator-scripts\") pod \"nova-cell1-30c4-account-create-update-7tq6t\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.170143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7txz\" (UniqueName: \"kubernetes.io/projected/163febb0-9715-4944-8c59-0a4997e12c47-kube-api-access-r7txz\") pod \"nova-cell1-30c4-account-create-update-7tq6t\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.313064 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.314249 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mckms"] Jan 25 08:17:27 crc kubenswrapper[4832]: W0125 08:17:27.340209 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3981045c_8650_4fda_af05_1ff4196d30de.slice/crio-059a9eab5677230cd1e00415056adfeb33129c4c87e6bf26cefbe13e997c9cef WatchSource:0}: Error finding container 059a9eab5677230cd1e00415056adfeb33129c4c87e6bf26cefbe13e997c9cef: Status 404 returned error can't find the container with id 059a9eab5677230cd1e00415056adfeb33129c4c87e6bf26cefbe13e997c9cef Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.542576 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qfsv4"] Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.579749 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q8swj"] Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.696185 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cdb9042-6480-49eb-b855-ac5c5adce9a4" path="/var/lib/kubelet/pods/0cdb9042-6480-49eb-b855-ac5c5adce9a4/volumes" Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.727852 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-734e-account-create-update-h4xzg"] Jan 25 08:17:27 crc kubenswrapper[4832]: W0125 08:17:27.738779 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ebae8e_c265_49f1_a050_d6ae6b1ea729.slice/crio-d1532bdc4bc3483d7356a21baa7432fa4ec0d68bc8f938b8f1ef387e16799d14 WatchSource:0}: Error finding container d1532bdc4bc3483d7356a21baa7432fa4ec0d68bc8f938b8f1ef387e16799d14: Status 404 returned error can't find the 
container with id d1532bdc4bc3483d7356a21baa7432fa4ec0d68bc8f938b8f1ef387e16799d14 Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.780429 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fdf0-account-create-update-xcnhj"] Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.797239 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q8swj" event={"ID":"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6","Type":"ContainerStarted","Data":"15aed774a137c891b3c49824e148207101f35173bc3ae4273d1b0f945c528ecf"} Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.806019 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-734e-account-create-update-h4xzg" event={"ID":"48ebae8e-c265-49f1-a050-d6ae6b1ea729","Type":"ContainerStarted","Data":"d1532bdc4bc3483d7356a21baa7432fa4ec0d68bc8f938b8f1ef387e16799d14"} Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.811761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959","Type":"ContainerStarted","Data":"e08a446d9730b74e5f920a6f124b5b740944cfefd7ad25431de312f7c75016a9"} Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.817009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qfsv4" event={"ID":"ede7170a-cec3-43e5-b7de-d37e72f0cc11","Type":"ContainerStarted","Data":"905b27e908798dee242e4487c47293f835e6e4850749ea0e8c233a5f82a263c5"} Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.818821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mckms" event={"ID":"3981045c-8650-4fda-af05-1ff4196d30de","Type":"ContainerStarted","Data":"059a9eab5677230cd1e00415056adfeb33129c4c87e6bf26cefbe13e997c9cef"} Jan 25 08:17:27 crc kubenswrapper[4832]: W0125 08:17:27.825894 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b1d3eaf_356b_4dd4_87ed_2561b811f68e.slice/crio-53e8be7bf165e85ca5d0d6e7c4799824cee4aec2b38d538427369e226b29b8dc WatchSource:0}: Error finding container 53e8be7bf165e85ca5d0d6e7c4799824cee4aec2b38d538427369e226b29b8dc: Status 404 returned error can't find the container with id 53e8be7bf165e85ca5d0d6e7c4799824cee4aec2b38d538427369e226b29b8dc Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.831110 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerID="9c76a612cd6731411225aa1754ce7dbee2923523b6cba2bce2299702d69fa5c0" exitCode=0 Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.831167 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6","Type":"ContainerDied","Data":"9c76a612cd6731411225aa1754ce7dbee2923523b6cba2bce2299702d69fa5c0"} Jan 25 08:17:27 crc kubenswrapper[4832]: I0125 08:17:27.942259 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-30c4-account-create-update-7tq6t"] Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.202645 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-combined-ca-bundle\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kgjp\" (UniqueName: \"kubernetes.io/projected/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-kube-api-access-2kgjp\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287626 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-internal-tls-certs\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287703 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-scripts\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-config-data\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287795 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287887 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-logs\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.287972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-httpd-run\") pod \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\" (UID: \"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6\") " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.289088 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.291457 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-logs" (OuterVolumeSpecName: "logs") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.308609 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-scripts" (OuterVolumeSpecName: "scripts") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.308695 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-kube-api-access-2kgjp" (OuterVolumeSpecName: "kube-api-access-2kgjp") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "kube-api-access-2kgjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.315341 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.354193 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.393076 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.393121 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.393142 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kgjp\" (UniqueName: \"kubernetes.io/projected/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-kube-api-access-2kgjp\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.393151 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.393188 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.393201 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.506071 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-config-data" (OuterVolumeSpecName: "config-data") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.544334 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" (UID: "2d5b38e8-fe79-41d7-9c0e-f053ae1029a6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.577607 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.605906 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.605954 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.605971 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.846907 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6" containerID="74feba622b39acd952edc75d90e881187844c3d737b9ade8bd9261054a4fe7df" exitCode=0 Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.847038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q8swj" 
event={"ID":"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6","Type":"ContainerDied","Data":"74feba622b39acd952edc75d90e881187844c3d737b9ade8bd9261054a4fe7df"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.849816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" event={"ID":"163febb0-9715-4944-8c59-0a4997e12c47","Type":"ContainerStarted","Data":"8c6e45c2487cd568917904abd06657c93fb9f8e390d1bc11ee30bf0ba90c5c5a"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.849879 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" event={"ID":"163febb0-9715-4944-8c59-0a4997e12c47","Type":"ContainerStarted","Data":"ac4e234792e9597840b2c97a9ab4b641582ddf476e54df23800eac9e7456b077"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.861002 4832 generic.go:334] "Generic (PLEG): container finished" podID="48ebae8e-c265-49f1-a050-d6ae6b1ea729" containerID="fc03f602940db592f521266666b34d036bde2a885f9cdd5822d1a8f20d2102fc" exitCode=0 Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.861082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-734e-account-create-update-h4xzg" event={"ID":"48ebae8e-c265-49f1-a050-d6ae6b1ea729","Type":"ContainerDied","Data":"fc03f602940db592f521266666b34d036bde2a885f9cdd5822d1a8f20d2102fc"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.868352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959","Type":"ContainerStarted","Data":"6904ff5d48108b718159d46d2ad3d96decca0723cc10ccbe7940a107ae29fbdd"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.872996 4832 generic.go:334] "Generic (PLEG): container finished" podID="ede7170a-cec3-43e5-b7de-d37e72f0cc11" containerID="9aed33b39d8ec4a014db4076866d65d4b3af3057eba886f29af7e602655e6bfe" exitCode=0 Jan 25 08:17:28 crc 
kubenswrapper[4832]: I0125 08:17:28.873086 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qfsv4" event={"ID":"ede7170a-cec3-43e5-b7de-d37e72f0cc11","Type":"ContainerDied","Data":"9aed33b39d8ec4a014db4076866d65d4b3af3057eba886f29af7e602655e6bfe"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.875564 4832 generic.go:334] "Generic (PLEG): container finished" podID="2b1d3eaf-356b-4dd4-87ed-2561b811f68e" containerID="8c5c43b555531e24c9bf75c76d9f3ae85e93dd331f3f986aa123e861dd761092" exitCode=0 Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.875623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fdf0-account-create-update-xcnhj" event={"ID":"2b1d3eaf-356b-4dd4-87ed-2561b811f68e","Type":"ContainerDied","Data":"8c5c43b555531e24c9bf75c76d9f3ae85e93dd331f3f986aa123e861dd761092"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.875646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fdf0-account-create-update-xcnhj" event={"ID":"2b1d3eaf-356b-4dd4-87ed-2561b811f68e","Type":"ContainerStarted","Data":"53e8be7bf165e85ca5d0d6e7c4799824cee4aec2b38d538427369e226b29b8dc"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.880047 4832 generic.go:334] "Generic (PLEG): container finished" podID="3981045c-8650-4fda-af05-1ff4196d30de" containerID="61a1a4c106ee00b40a614d814d29530aa167c26aa1937b6057642254d73285e4" exitCode=0 Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.880095 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mckms" event={"ID":"3981045c-8650-4fda-af05-1ff4196d30de","Type":"ContainerDied","Data":"61a1a4c106ee00b40a614d814d29530aa167c26aa1937b6057642254d73285e4"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.888025 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"2d5b38e8-fe79-41d7-9c0e-f053ae1029a6","Type":"ContainerDied","Data":"15735f60fbb6f2381e175f8be4edec672ef72977049e2445d0afc6c741ef1afb"} Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.888112 4832 scope.go:117] "RemoveContainer" containerID="9c76a612cd6731411225aa1754ce7dbee2923523b6cba2bce2299702d69fa5c0" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.888114 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.892796 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" podStartSLOduration=2.89270277 podStartE2EDuration="2.89270277s" podCreationTimestamp="2026-01-25 08:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:17:28.88741864 +0000 UTC m=+1231.561242173" watchObservedRunningTime="2026-01-25 08:17:28.89270277 +0000 UTC m=+1231.566526303" Jan 25 08:17:28 crc kubenswrapper[4832]: I0125 08:17:28.971025 4832 scope.go:117] "RemoveContainer" containerID="70dd41b47f030be98780515dc5751d968023e82ef169c81d380463ea5150cd5f" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.007789 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.037606 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.076513 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:17:29 crc kubenswrapper[4832]: E0125 08:17:29.077221 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-httpd" Jan 25 08:17:29 crc 
kubenswrapper[4832]: I0125 08:17:29.077244 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-httpd" Jan 25 08:17:29 crc kubenswrapper[4832]: E0125 08:17:29.077266 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-log" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.077274 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-log" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.077810 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-httpd" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.077828 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" containerName="glance-log" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.079302 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.083101 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.083474 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.094491 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca10626f-eeda-438c-8d2b-5b7c734db90d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217287 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217328 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217422 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca10626f-eeda-438c-8d2b-5b7c734db90d-logs\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2wl\" (UniqueName: \"kubernetes.io/projected/ca10626f-eeda-438c-8d2b-5b7c734db90d-kube-api-access-wb2wl\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.217834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.320811 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2wl\" 
(UniqueName: \"kubernetes.io/projected/ca10626f-eeda-438c-8d2b-5b7c734db90d-kube-api-access-wb2wl\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.325667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.325788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca10626f-eeda-438c-8d2b-5b7c734db90d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.325896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.325946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.326123 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.326237 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca10626f-eeda-438c-8d2b-5b7c734db90d-logs\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.326268 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.328672 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.329038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca10626f-eeda-438c-8d2b-5b7c734db90d-logs\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.329369 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca10626f-eeda-438c-8d2b-5b7c734db90d-httpd-run\") pod \"glance-default-internal-api-0\" 
(UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.339614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.340544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.353920 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.354819 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2wl\" (UniqueName: \"kubernetes.io/projected/ca10626f-eeda-438c-8d2b-5b7c734db90d-kube-api-access-wb2wl\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.360943 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca10626f-eeda-438c-8d2b-5b7c734db90d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " 
pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.383251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca10626f-eeda-438c-8d2b-5b7c734db90d\") " pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.412692 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.423690 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.530545 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-scripts\") pod \"65a902e4-15aa-499b-aa8e-a5ed097f9918\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.530714 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-combined-ca-bundle\") pod \"65a902e4-15aa-499b-aa8e-a5ed097f9918\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.530772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-run-httpd\") pod \"65a902e4-15aa-499b-aa8e-a5ed097f9918\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.530851 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-sg-core-conf-yaml\") pod \"65a902e4-15aa-499b-aa8e-a5ed097f9918\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.530985 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7psfk\" (UniqueName: \"kubernetes.io/projected/65a902e4-15aa-499b-aa8e-a5ed097f9918-kube-api-access-7psfk\") pod \"65a902e4-15aa-499b-aa8e-a5ed097f9918\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.531045 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-config-data\") pod \"65a902e4-15aa-499b-aa8e-a5ed097f9918\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.531089 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-log-httpd\") pod \"65a902e4-15aa-499b-aa8e-a5ed097f9918\" (UID: \"65a902e4-15aa-499b-aa8e-a5ed097f9918\") " Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.531972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65a902e4-15aa-499b-aa8e-a5ed097f9918" (UID: "65a902e4-15aa-499b-aa8e-a5ed097f9918"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.532231 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65a902e4-15aa-499b-aa8e-a5ed097f9918" (UID: "65a902e4-15aa-499b-aa8e-a5ed097f9918"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.535599 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.535646 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a902e4-15aa-499b-aa8e-a5ed097f9918-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.538686 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-scripts" (OuterVolumeSpecName: "scripts") pod "65a902e4-15aa-499b-aa8e-a5ed097f9918" (UID: "65a902e4-15aa-499b-aa8e-a5ed097f9918"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.539121 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a902e4-15aa-499b-aa8e-a5ed097f9918-kube-api-access-7psfk" (OuterVolumeSpecName: "kube-api-access-7psfk") pod "65a902e4-15aa-499b-aa8e-a5ed097f9918" (UID: "65a902e4-15aa-499b-aa8e-a5ed097f9918"). InnerVolumeSpecName "kube-api-access-7psfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.616513 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65a902e4-15aa-499b-aa8e-a5ed097f9918" (UID: "65a902e4-15aa-499b-aa8e-a5ed097f9918"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.648431 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.648506 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7psfk\" (UniqueName: \"kubernetes.io/projected/65a902e4-15aa-499b-aa8e-a5ed097f9918-kube-api-access-7psfk\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.648520 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.683520 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65a902e4-15aa-499b-aa8e-a5ed097f9918" (UID: "65a902e4-15aa-499b-aa8e-a5ed097f9918"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.698552 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5b38e8-fe79-41d7-9c0e-f053ae1029a6" path="/var/lib/kubelet/pods/2d5b38e8-fe79-41d7-9c0e-f053ae1029a6/volumes" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.710830 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-config-data" (OuterVolumeSpecName: "config-data") pod "65a902e4-15aa-499b-aa8e-a5ed097f9918" (UID: "65a902e4-15aa-499b-aa8e-a5ed097f9918"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.750934 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.750984 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a902e4-15aa-499b-aa8e-a5ed097f9918-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.903290 4832 generic.go:334] "Generic (PLEG): container finished" podID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerID="ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89" exitCode=0 Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.903331 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerDied","Data":"ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89"} Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.903917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a902e4-15aa-499b-aa8e-a5ed097f9918","Type":"ContainerDied","Data":"971eaaebd328a54cb3148204d3cb86fe51822f75c5d0c97fa3f15a36eda03b96"} Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.903405 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.903960 4832 scope.go:117] "RemoveContainer" containerID="b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.910490 4832 generic.go:334] "Generic (PLEG): container finished" podID="163febb0-9715-4944-8c59-0a4997e12c47" containerID="8c6e45c2487cd568917904abd06657c93fb9f8e390d1bc11ee30bf0ba90c5c5a" exitCode=0 Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.910565 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" event={"ID":"163febb0-9715-4944-8c59-0a4997e12c47","Type":"ContainerDied","Data":"8c6e45c2487cd568917904abd06657c93fb9f8e390d1bc11ee30bf0ba90c5c5a"} Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.914105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ba1988f-0ee4-4e4d-9b32-eff3fe30c959","Type":"ContainerStarted","Data":"98aff73bf3cbf4f44de73d082df09d6ee80c99acc948800d4f88bfa989918827"} Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.955355 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.961599 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.977208 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.980229 4832 scope.go:117] "RemoveContainer" containerID="a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e" Jan 25 08:17:29 crc kubenswrapper[4832]: E0125 08:17:29.980954 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="proxy-httpd" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 
08:17:29.981004 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="proxy-httpd" Jan 25 08:17:29 crc kubenswrapper[4832]: E0125 08:17:29.981023 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="sg-core" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.981029 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="sg-core" Jan 25 08:17:29 crc kubenswrapper[4832]: E0125 08:17:29.981458 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-notification-agent" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.981468 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-notification-agent" Jan 25 08:17:29 crc kubenswrapper[4832]: E0125 08:17:29.981481 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-central-agent" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.981487 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-central-agent" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.981852 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="sg-core" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.981865 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="proxy-httpd" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.981882 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-central-agent" Jan 25 08:17:29 crc 
kubenswrapper[4832]: I0125 08:17:29.981897 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" containerName="ceilometer-notification-agent" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.984715 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.991182 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.991357 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:17:29 crc kubenswrapper[4832]: I0125 08:17:29.994864 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.994833491 podStartE2EDuration="4.994833491s" podCreationTimestamp="2026-01-25 08:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:17:29.99086852 +0000 UTC m=+1232.664692053" watchObservedRunningTime="2026-01-25 08:17:29.994833491 +0000 UTC m=+1232.668657034" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.037652 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.061602 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.061666 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.061745 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ppm\" (UniqueName: \"kubernetes.io/projected/d34a22ee-66f7-411b-a395-7c52e98c6ef3-kube-api-access-l5ppm\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.061776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-scripts\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.061802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-log-httpd\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.061833 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-run-httpd\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.061855 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-config-data\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " 
pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.063978 4832 scope.go:117] "RemoveContainer" containerID="a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.166254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.166634 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-config-data\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.167045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.167136 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.167339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5ppm\" (UniqueName: \"kubernetes.io/projected/d34a22ee-66f7-411b-a395-7c52e98c6ef3-kube-api-access-l5ppm\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.167505 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-scripts\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.167574 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-log-httpd\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.167656 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-run-httpd\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.170562 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-run-httpd\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.174902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-log-httpd\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.185693 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5ppm\" (UniqueName: \"kubernetes.io/projected/d34a22ee-66f7-411b-a395-7c52e98c6ef3-kube-api-access-l5ppm\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.185771 4832 scope.go:117] 
"RemoveContainer" containerID="ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.191087 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.193145 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.202613 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-scripts\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.218923 4832 scope.go:117] "RemoveContainer" containerID="b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.221844 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-config-data\") pod \"ceilometer-0\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: E0125 08:17:30.228009 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9\": container with ID starting with 
b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9 not found: ID does not exist" containerID="b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.228061 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9"} err="failed to get container status \"b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9\": rpc error: code = NotFound desc = could not find container \"b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9\": container with ID starting with b46a84b23fd0ab1ede9ad840262d6a7a815eb925c3b08bcc39a2f88039df1ac9 not found: ID does not exist" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.228089 4832 scope.go:117] "RemoveContainer" containerID="a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e" Jan 25 08:17:30 crc kubenswrapper[4832]: E0125 08:17:30.228403 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e\": container with ID starting with a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e not found: ID does not exist" containerID="a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.228422 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e"} err="failed to get container status \"a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e\": rpc error: code = NotFound desc = could not find container \"a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e\": container with ID starting with a553a6783b46e2514941b45dfd0eeb2f2dc302e2182086e4a7a78da7f033628e not found: ID does not 
exist" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.228434 4832 scope.go:117] "RemoveContainer" containerID="a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a" Jan 25 08:17:30 crc kubenswrapper[4832]: E0125 08:17:30.228644 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a\": container with ID starting with a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a not found: ID does not exist" containerID="a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.228661 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a"} err="failed to get container status \"a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a\": rpc error: code = NotFound desc = could not find container \"a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a\": container with ID starting with a7c1b9dfea6f73228508a275ac64d4d74426170ef731f0f8ba9fdff0f8345d2a not found: ID does not exist" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.228673 4832 scope.go:117] "RemoveContainer" containerID="ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89" Jan 25 08:17:30 crc kubenswrapper[4832]: E0125 08:17:30.228877 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89\": container with ID starting with ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89 not found: ID does not exist" containerID="ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.228891 4832 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89"} err="failed to get container status \"ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89\": rpc error: code = NotFound desc = could not find container \"ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89\": container with ID starting with ea6ff49bce14edc653d6dab40433f839eb38f1c615aa3a14dc8b79262ec41d89 not found: ID does not exist" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.329426 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.555057 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qfsv4" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.678172 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww7n4\" (UniqueName: \"kubernetes.io/projected/ede7170a-cec3-43e5-b7de-d37e72f0cc11-kube-api-access-ww7n4\") pod \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.678916 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede7170a-cec3-43e5-b7de-d37e72f0cc11-operator-scripts\") pod \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\" (UID: \"ede7170a-cec3-43e5-b7de-d37e72f0cc11\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.679932 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede7170a-cec3-43e5-b7de-d37e72f0cc11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ede7170a-cec3-43e5-b7de-d37e72f0cc11" (UID: "ede7170a-cec3-43e5-b7de-d37e72f0cc11"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.685544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede7170a-cec3-43e5-b7de-d37e72f0cc11-kube-api-access-ww7n4" (OuterVolumeSpecName: "kube-api-access-ww7n4") pod "ede7170a-cec3-43e5-b7de-d37e72f0cc11" (UID: "ede7170a-cec3-43e5-b7de-d37e72f0cc11"). InnerVolumeSpecName "kube-api-access-ww7n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.779137 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.780193 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mckms" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.782343 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww7n4\" (UniqueName: \"kubernetes.io/projected/ede7170a-cec3-43e5-b7de-d37e72f0cc11-kube-api-access-ww7n4\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.782409 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede7170a-cec3-43e5-b7de-d37e72f0cc11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.811370 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.839485 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.883967 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3981045c-8650-4fda-af05-1ff4196d30de-operator-scripts\") pod \"3981045c-8650-4fda-af05-1ff4196d30de\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.884066 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-operator-scripts\") pod \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.884119 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwrx\" (UniqueName: \"kubernetes.io/projected/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-kube-api-access-ftwrx\") pod \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\" (UID: \"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.884172 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-operator-scripts\") pod \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.884244 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebae8e-c265-49f1-a050-d6ae6b1ea729-operator-scripts\") pod \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.884321 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-55jks\" (UniqueName: \"kubernetes.io/projected/48ebae8e-c265-49f1-a050-d6ae6b1ea729-kube-api-access-55jks\") pod \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\" (UID: \"48ebae8e-c265-49f1-a050-d6ae6b1ea729\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.884440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-786z7\" (UniqueName: \"kubernetes.io/projected/3981045c-8650-4fda-af05-1ff4196d30de-kube-api-access-786z7\") pod \"3981045c-8650-4fda-af05-1ff4196d30de\" (UID: \"3981045c-8650-4fda-af05-1ff4196d30de\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.884605 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjnb2\" (UniqueName: \"kubernetes.io/projected/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-kube-api-access-rjnb2\") pod \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\" (UID: \"2b1d3eaf-356b-4dd4-87ed-2561b811f68e\") " Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.886475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b1d3eaf-356b-4dd4-87ed-2561b811f68e" (UID: "2b1d3eaf-356b-4dd4-87ed-2561b811f68e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.887370 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6" (UID: "f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.887599 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ebae8e-c265-49f1-a050-d6ae6b1ea729-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48ebae8e-c265-49f1-a050-d6ae6b1ea729" (UID: "48ebae8e-c265-49f1-a050-d6ae6b1ea729"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.887740 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3981045c-8650-4fda-af05-1ff4196d30de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3981045c-8650-4fda-af05-1ff4196d30de" (UID: "3981045c-8650-4fda-af05-1ff4196d30de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.897671 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3981045c-8650-4fda-af05-1ff4196d30de-kube-api-access-786z7" (OuterVolumeSpecName: "kube-api-access-786z7") pod "3981045c-8650-4fda-af05-1ff4196d30de" (UID: "3981045c-8650-4fda-af05-1ff4196d30de"). InnerVolumeSpecName "kube-api-access-786z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.912081 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-kube-api-access-ftwrx" (OuterVolumeSpecName: "kube-api-access-ftwrx") pod "f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6" (UID: "f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6"). InnerVolumeSpecName "kube-api-access-ftwrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.920086 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-kube-api-access-rjnb2" (OuterVolumeSpecName: "kube-api-access-rjnb2") pod "2b1d3eaf-356b-4dd4-87ed-2561b811f68e" (UID: "2b1d3eaf-356b-4dd4-87ed-2561b811f68e"). InnerVolumeSpecName "kube-api-access-rjnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.931272 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca10626f-eeda-438c-8d2b-5b7c734db90d","Type":"ContainerStarted","Data":"5f1ee2446cac19a324253d6ddecc53110a86188fe3b5add480c2285ead8eaa53"} Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.931439 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ebae8e-c265-49f1-a050-d6ae6b1ea729-kube-api-access-55jks" (OuterVolumeSpecName: "kube-api-access-55jks") pod "48ebae8e-c265-49f1-a050-d6ae6b1ea729" (UID: "48ebae8e-c265-49f1-a050-d6ae6b1ea729"). InnerVolumeSpecName "kube-api-access-55jks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.947689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-734e-account-create-update-h4xzg" event={"ID":"48ebae8e-c265-49f1-a050-d6ae6b1ea729","Type":"ContainerDied","Data":"d1532bdc4bc3483d7356a21baa7432fa4ec0d68bc8f938b8f1ef387e16799d14"} Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.947733 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1532bdc4bc3483d7356a21baa7432fa4ec0d68bc8f938b8f1ef387e16799d14" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.947796 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-734e-account-create-update-h4xzg" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.960598 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qfsv4" event={"ID":"ede7170a-cec3-43e5-b7de-d37e72f0cc11","Type":"ContainerDied","Data":"905b27e908798dee242e4487c47293f835e6e4850749ea0e8c233a5f82a263c5"} Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.960643 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905b27e908798dee242e4487c47293f835e6e4850749ea0e8c233a5f82a263c5" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.960743 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qfsv4" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.969481 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fdf0-account-create-update-xcnhj" event={"ID":"2b1d3eaf-356b-4dd4-87ed-2561b811f68e","Type":"ContainerDied","Data":"53e8be7bf165e85ca5d0d6e7c4799824cee4aec2b38d538427369e226b29b8dc"} Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.970163 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53e8be7bf165e85ca5d0d6e7c4799824cee4aec2b38d538427369e226b29b8dc" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.969562 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fdf0-account-create-update-xcnhj" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.978365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mckms" event={"ID":"3981045c-8650-4fda-af05-1ff4196d30de","Type":"ContainerDied","Data":"059a9eab5677230cd1e00415056adfeb33129c4c87e6bf26cefbe13e997c9cef"} Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.978428 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mckms" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.978451 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059a9eab5677230cd1e00415056adfeb33129c4c87e6bf26cefbe13e997c9cef" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.984795 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q8swj" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.985549 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q8swj" event={"ID":"f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6","Type":"ContainerDied","Data":"15aed774a137c891b3c49824e148207101f35173bc3ae4273d1b0f945c528ecf"} Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.985653 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15aed774a137c891b3c49824e148207101f35173bc3ae4273d1b0f945c528ecf" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986593 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjnb2\" (UniqueName: \"kubernetes.io/projected/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-kube-api-access-rjnb2\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986614 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3981045c-8650-4fda-af05-1ff4196d30de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986625 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986636 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftwrx\" (UniqueName: 
\"kubernetes.io/projected/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6-kube-api-access-ftwrx\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986644 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b1d3eaf-356b-4dd4-87ed-2561b811f68e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986653 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebae8e-c265-49f1-a050-d6ae6b1ea729-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986662 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55jks\" (UniqueName: \"kubernetes.io/projected/48ebae8e-c265-49f1-a050-d6ae6b1ea729-kube-api-access-55jks\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:30 crc kubenswrapper[4832]: I0125 08:17:30.986674 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-786z7\" (UniqueName: \"kubernetes.io/projected/3981045c-8650-4fda-af05-1ff4196d30de-kube-api-access-786z7\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.223369 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.567765 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.616358 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/163febb0-9715-4944-8c59-0a4997e12c47-operator-scripts\") pod \"163febb0-9715-4944-8c59-0a4997e12c47\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.616687 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7txz\" (UniqueName: \"kubernetes.io/projected/163febb0-9715-4944-8c59-0a4997e12c47-kube-api-access-r7txz\") pod \"163febb0-9715-4944-8c59-0a4997e12c47\" (UID: \"163febb0-9715-4944-8c59-0a4997e12c47\") " Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.617318 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/163febb0-9715-4944-8c59-0a4997e12c47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "163febb0-9715-4944-8c59-0a4997e12c47" (UID: "163febb0-9715-4944-8c59-0a4997e12c47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.622272 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163febb0-9715-4944-8c59-0a4997e12c47-kube-api-access-r7txz" (OuterVolumeSpecName: "kube-api-access-r7txz") pod "163febb0-9715-4944-8c59-0a4997e12c47" (UID: "163febb0-9715-4944-8c59-0a4997e12c47"). InnerVolumeSpecName "kube-api-access-r7txz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.707895 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a902e4-15aa-499b-aa8e-a5ed097f9918" path="/var/lib/kubelet/pods/65a902e4-15aa-499b-aa8e-a5ed097f9918/volumes" Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.719537 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7txz\" (UniqueName: \"kubernetes.io/projected/163febb0-9715-4944-8c59-0a4997e12c47-kube-api-access-r7txz\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.719748 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/163febb0-9715-4944-8c59-0a4997e12c47-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:31 crc kubenswrapper[4832]: I0125 08:17:31.994998 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca10626f-eeda-438c-8d2b-5b7c734db90d","Type":"ContainerStarted","Data":"3ec5bb701173c0d6ac730201c4d5f880a2eef9235c01f4d5f49e9865890dff55"} Jan 25 08:17:32 crc kubenswrapper[4832]: I0125 08:17:32.005246 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerStarted","Data":"c41702f451f359ddec6d742717b2f4ad61bbc16d5ed2e00696f6f4fc9691db7f"} Jan 25 08:17:32 crc kubenswrapper[4832]: I0125 08:17:32.009612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" event={"ID":"163febb0-9715-4944-8c59-0a4997e12c47","Type":"ContainerDied","Data":"ac4e234792e9597840b2c97a9ab4b641582ddf476e54df23800eac9e7456b077"} Jan 25 08:17:32 crc kubenswrapper[4832]: I0125 08:17:32.009662 4832 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ac4e234792e9597840b2c97a9ab4b641582ddf476e54df23800eac9e7456b077" Jan 25 08:17:32 crc kubenswrapper[4832]: I0125 08:17:32.009727 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-30c4-account-create-update-7tq6t" Jan 25 08:17:32 crc kubenswrapper[4832]: I0125 08:17:32.668531 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:17:33 crc kubenswrapper[4832]: I0125 08:17:33.019272 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerStarted","Data":"5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f"} Jan 25 08:17:33 crc kubenswrapper[4832]: I0125 08:17:33.021468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca10626f-eeda-438c-8d2b-5b7c734db90d","Type":"ContainerStarted","Data":"24136d4ac0c78aa067c95fffb84f667b9d2ecb7724ee0dc028f86339e4daa09b"} Jan 25 08:17:33 crc kubenswrapper[4832]: I0125 08:17:33.051601 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.051580713 podStartE2EDuration="5.051580713s" podCreationTimestamp="2026-01-25 08:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:17:33.043047944 +0000 UTC m=+1235.716871477" watchObservedRunningTime="2026-01-25 08:17:33.051580713 +0000 UTC m=+1235.725404246" Jan 25 08:17:34 crc kubenswrapper[4832]: I0125 08:17:34.032102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerStarted","Data":"260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0"} Jan 25 08:17:34 crc kubenswrapper[4832]: I0125 
08:17:34.032439 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerStarted","Data":"85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6"} Jan 25 08:17:34 crc kubenswrapper[4832]: I0125 08:17:34.090139 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:34 crc kubenswrapper[4832]: I0125 08:17:34.566200 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f649cfc6-vzpx7" Jan 25 08:17:34 crc kubenswrapper[4832]: I0125 08:17:34.622735 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-856b6b4996-m59cl"] Jan 25 08:17:34 crc kubenswrapper[4832]: I0125 08:17:34.623039 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-856b6b4996-m59cl" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon-log" containerID="cri-o://c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76" gracePeriod=30 Jan 25 08:17:34 crc kubenswrapper[4832]: I0125 08:17:34.623207 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-856b6b4996-m59cl" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" containerID="cri-o://bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964" gracePeriod=30 Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.050131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerStarted","Data":"80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb"} Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.050674 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.050618 4832 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="sg-core" containerID="cri-o://260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0" gracePeriod=30 Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.050344 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="proxy-httpd" containerID="cri-o://80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb" gracePeriod=30 Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.050635 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-notification-agent" containerID="cri-o://85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6" gracePeriod=30 Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.050931 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-central-agent" containerID="cri-o://5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f" gracePeriod=30 Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.082188 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.224675537 podStartE2EDuration="7.082164921s" podCreationTimestamp="2026-01-25 08:17:29 +0000 UTC" firstStartedPulling="2026-01-25 08:17:31.198028758 +0000 UTC m=+1233.871852291" lastFinishedPulling="2026-01-25 08:17:35.055518142 +0000 UTC m=+1237.729341675" observedRunningTime="2026-01-25 08:17:36.070906409 +0000 UTC m=+1238.744729932" watchObservedRunningTime="2026-01-25 08:17:36.082164921 +0000 UTC m=+1238.755988454" Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.124691 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.126591 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.172067 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 25 08:17:36 crc kubenswrapper[4832]: I0125 08:17:36.172975 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058122 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7snwr"] Jan 25 08:17:37 crc kubenswrapper[4832]: E0125 08:17:37.058699 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058712 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: E0125 08:17:37.058738 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1d3eaf-356b-4dd4-87ed-2561b811f68e" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058744 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1d3eaf-356b-4dd4-87ed-2561b811f68e" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: E0125 08:17:37.058755 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163febb0-9715-4944-8c59-0a4997e12c47" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058762 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="163febb0-9715-4944-8c59-0a4997e12c47" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: E0125 08:17:37.058778 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede7170a-cec3-43e5-b7de-d37e72f0cc11" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058784 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede7170a-cec3-43e5-b7de-d37e72f0cc11" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: E0125 08:17:37.058795 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3981045c-8650-4fda-af05-1ff4196d30de" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058801 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3981045c-8650-4fda-af05-1ff4196d30de" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: E0125 08:17:37.058815 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ebae8e-c265-49f1-a050-d6ae6b1ea729" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058821 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ebae8e-c265-49f1-a050-d6ae6b1ea729" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058969 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ebae8e-c265-49f1-a050-d6ae6b1ea729" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058978 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede7170a-cec3-43e5-b7de-d37e72f0cc11" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.058990 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3981045c-8650-4fda-af05-1ff4196d30de" containerName="mariadb-database-create" Jan 25 08:17:37 
crc kubenswrapper[4832]: I0125 08:17:37.059003 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="163febb0-9715-4944-8c59-0a4997e12c47" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.059017 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1d3eaf-356b-4dd4-87ed-2561b811f68e" containerName="mariadb-account-create-update" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.059042 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6" containerName="mariadb-database-create" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.059778 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.061924 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.062114 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rf7hq" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.065194 4832 generic.go:334] "Generic (PLEG): container finished" podID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerID="80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb" exitCode=0 Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.065224 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerDied","Data":"80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb"} Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.065257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerDied","Data":"260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0"} Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.065231 4832 generic.go:334] "Generic (PLEG): container finished" podID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerID="260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0" exitCode=2 Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.065274 4832 generic.go:334] "Generic (PLEG): container finished" podID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerID="85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6" exitCode=0 Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.066428 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerDied","Data":"85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6"} Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.066455 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.066563 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.067535 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.070286 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7snwr"] Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.131001 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-scripts\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: 
\"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.131050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmngs\" (UniqueName: \"kubernetes.io/projected/47eba52e-d8fa-4336-9c57-7006963eb712-kube-api-access-bmngs\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.131085 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.131258 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-config-data\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.232741 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-config-data\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.232814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-scripts\") 
pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.232855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmngs\" (UniqueName: \"kubernetes.io/projected/47eba52e-d8fa-4336-9c57-7006963eb712-kube-api-access-bmngs\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.232900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.238520 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.241832 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-scripts\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.248843 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-config-data\") pod 
\"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.248886 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmngs\" (UniqueName: \"kubernetes.io/projected/47eba52e-d8fa-4336-9c57-7006963eb712-kube-api-access-bmngs\") pod \"nova-cell0-conductor-db-sync-7snwr\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.380789 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.579373 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.639289 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-sg-core-conf-yaml\") pod \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.639483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-scripts\") pod \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.639592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-run-httpd\") pod \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 
08:17:37.639612 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-combined-ca-bundle\") pod \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.639630 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5ppm\" (UniqueName: \"kubernetes.io/projected/d34a22ee-66f7-411b-a395-7c52e98c6ef3-kube-api-access-l5ppm\") pod \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.639804 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-config-data\") pod \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.639843 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-log-httpd\") pod \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\" (UID: \"d34a22ee-66f7-411b-a395-7c52e98c6ef3\") " Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.640610 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d34a22ee-66f7-411b-a395-7c52e98c6ef3" (UID: "d34a22ee-66f7-411b-a395-7c52e98c6ef3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.640834 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d34a22ee-66f7-411b-a395-7c52e98c6ef3" (UID: "d34a22ee-66f7-411b-a395-7c52e98c6ef3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.645321 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-scripts" (OuterVolumeSpecName: "scripts") pod "d34a22ee-66f7-411b-a395-7c52e98c6ef3" (UID: "d34a22ee-66f7-411b-a395-7c52e98c6ef3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.657736 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34a22ee-66f7-411b-a395-7c52e98c6ef3-kube-api-access-l5ppm" (OuterVolumeSpecName: "kube-api-access-l5ppm") pod "d34a22ee-66f7-411b-a395-7c52e98c6ef3" (UID: "d34a22ee-66f7-411b-a395-7c52e98c6ef3"). InnerVolumeSpecName "kube-api-access-l5ppm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.672509 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d34a22ee-66f7-411b-a395-7c52e98c6ef3" (UID: "d34a22ee-66f7-411b-a395-7c52e98c6ef3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.737807 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d34a22ee-66f7-411b-a395-7c52e98c6ef3" (UID: "d34a22ee-66f7-411b-a395-7c52e98c6ef3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.741714 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.741742 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.741777 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.741785 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34a22ee-66f7-411b-a395-7c52e98c6ef3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.741793 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.741801 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5ppm\" (UniqueName: 
\"kubernetes.io/projected/d34a22ee-66f7-411b-a395-7c52e98c6ef3-kube-api-access-l5ppm\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.781821 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-config-data" (OuterVolumeSpecName: "config-data") pod "d34a22ee-66f7-411b-a395-7c52e98c6ef3" (UID: "d34a22ee-66f7-411b-a395-7c52e98c6ef3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.844733 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34a22ee-66f7-411b-a395-7c52e98c6ef3-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:37 crc kubenswrapper[4832]: W0125 08:17:37.979694 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47eba52e_d8fa_4336_9c57_7006963eb712.slice/crio-ffebb57d29fdc0accddaccb2e15f57ff8159296f20ced3ec8fcb355ff52b4534 WatchSource:0}: Error finding container ffebb57d29fdc0accddaccb2e15f57ff8159296f20ced3ec8fcb355ff52b4534: Status 404 returned error can't find the container with id ffebb57d29fdc0accddaccb2e15f57ff8159296f20ced3ec8fcb355ff52b4534 Jan 25 08:17:37 crc kubenswrapper[4832]: I0125 08:17:37.981238 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7snwr"] Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.078568 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7snwr" event={"ID":"47eba52e-d8fa-4336-9c57-7006963eb712","Type":"ContainerStarted","Data":"ffebb57d29fdc0accddaccb2e15f57ff8159296f20ced3ec8fcb355ff52b4534"} Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.082772 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerID="5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f" exitCode=0 Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.083277 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerDied","Data":"5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f"} Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.083309 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.083404 4832 scope.go:117] "RemoveContainer" containerID="80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.083372 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34a22ee-66f7-411b-a395-7c52e98c6ef3","Type":"ContainerDied","Data":"c41702f451f359ddec6d742717b2f4ad61bbc16d5ed2e00696f6f4fc9691db7f"} Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.115137 4832 scope.go:117] "RemoveContainer" containerID="260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.120645 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.131587 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.137110 4832 scope.go:117] "RemoveContainer" containerID="85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.172584 4832 scope.go:117] "RemoveContainer" containerID="5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.204675 4832 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.205693 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-notification-agent" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.205719 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-notification-agent" Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.205736 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-central-agent" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.205744 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-central-agent" Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.205761 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="proxy-httpd" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.205769 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="proxy-httpd" Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.205780 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="sg-core" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.205788 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="sg-core" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.206701 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="proxy-httpd" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.206740 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-notification-agent" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.206932 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="ceilometer-central-agent" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.206971 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" containerName="sg-core" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.215582 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.217997 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.218320 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.223454 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.358693 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-log-httpd\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.359167 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-config-data\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.359196 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.359243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-scripts\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.359286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhcm\" (UniqueName: \"kubernetes.io/projected/8141145d-2a12-4069-9185-c8123c6a4c5a-kube-api-access-lmhcm\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.359421 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-run-httpd\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.359485 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.460829 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-run-httpd\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.460910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.460991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-log-httpd\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.461018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.461038 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-config-data\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.461076 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-scripts\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.461108 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhcm\" (UniqueName: \"kubernetes.io/projected/8141145d-2a12-4069-9185-c8123c6a4c5a-kube-api-access-lmhcm\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.462223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-log-httpd\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.462648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-run-httpd\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.468473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-config-data\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.468548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.468657 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.469958 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-scripts\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.478777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhcm\" (UniqueName: \"kubernetes.io/projected/8141145d-2a12-4069-9185-c8123c6a4c5a-kube-api-access-lmhcm\") pod \"ceilometer-0\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.598254 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.749571 4832 scope.go:117] "RemoveContainer" containerID="80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb" Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.750141 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb\": container with ID starting with 80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb not found: ID does not exist" containerID="80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.750182 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb"} err="failed to get container status \"80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb\": rpc error: code = NotFound desc = could not find container 
\"80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb\": container with ID starting with 80ead176e0214684ca013c5450b3e1fa0793628cff13d97e18cbad858a013eeb not found: ID does not exist" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.750201 4832 scope.go:117] "RemoveContainer" containerID="260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0" Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.751020 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0\": container with ID starting with 260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0 not found: ID does not exist" containerID="260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.751042 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0"} err="failed to get container status \"260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0\": rpc error: code = NotFound desc = could not find container \"260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0\": container with ID starting with 260b9f431a0ab35c8f270dde3563f44bb24fd5dbb6fa9b801c86c1a410bba7b0 not found: ID does not exist" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.751077 4832 scope.go:117] "RemoveContainer" containerID="85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6" Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.751422 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6\": container with ID starting with 85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6 not found: ID does not exist" 
containerID="85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.751444 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6"} err="failed to get container status \"85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6\": rpc error: code = NotFound desc = could not find container \"85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6\": container with ID starting with 85c0e91c20c0cf4665f5772c909cd611dea308b192eb95c11e760cf76cf980d6 not found: ID does not exist" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.751487 4832 scope.go:117] "RemoveContainer" containerID="5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f" Jan 25 08:17:38 crc kubenswrapper[4832]: E0125 08:17:38.751918 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f\": container with ID starting with 5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f not found: ID does not exist" containerID="5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f" Jan 25 08:17:38 crc kubenswrapper[4832]: I0125 08:17:38.751991 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f"} err="failed to get container status \"5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f\": rpc error: code = NotFound desc = could not find container \"5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f\": container with ID starting with 5b825fd873141b7eab1296c4cdce62309d49d65cdfab5ed0839d858df789af5f not found: ID does not exist" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.104364 4832 generic.go:334] 
"Generic (PLEG): container finished" podID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerID="bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964" exitCode=0 Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.104719 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.104727 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.104818 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856b6b4996-m59cl" event={"ID":"573d9b12-352d-4b14-b79c-f2a4a3bfec61","Type":"ContainerDied","Data":"bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964"} Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.271312 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.345749 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.347474 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.427029 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.427283 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.458997 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.485338 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.687069 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34a22ee-66f7-411b-a395-7c52e98c6ef3" path="/var/lib/kubelet/pods/d34a22ee-66f7-411b-a395-7c52e98c6ef3/volumes" Jan 25 08:17:39 crc kubenswrapper[4832]: I0125 08:17:39.811640 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-856b6b4996-m59cl" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 25 08:17:40 crc kubenswrapper[4832]: I0125 08:17:40.122538 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerStarted","Data":"54d8642a687fd5abb58434d5238cdd3c3d03b78179d5c13187a43fdc705ac23e"} Jan 25 08:17:40 crc kubenswrapper[4832]: I0125 08:17:40.123030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerStarted","Data":"a5ae5cda1bb23a86cc06760561e89a2699bd2f4815d59de6cf4fc70e58070011"} Jan 25 08:17:40 crc kubenswrapper[4832]: I0125 08:17:40.123557 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:40 crc kubenswrapper[4832]: I0125 08:17:40.123779 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:41 crc kubenswrapper[4832]: I0125 08:17:41.139380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerStarted","Data":"e10c6daa61c39ff66282ffcac1ab1e46b701f433b0de516e451c88e224723d1f"} Jan 25 08:17:42 crc kubenswrapper[4832]: I0125 
08:17:42.154845 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerStarted","Data":"effc41c602053ee68d78fdae6cb1f4e3e9aec87a3b757adff28bc94f394694cd"} Jan 25 08:17:42 crc kubenswrapper[4832]: I0125 08:17:42.894219 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:42 crc kubenswrapper[4832]: I0125 08:17:42.894675 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 25 08:17:42 crc kubenswrapper[4832]: I0125 08:17:42.920429 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 25 08:17:43 crc kubenswrapper[4832]: I0125 08:17:43.187469 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerStarted","Data":"67f35f533a6a9f3811e30cef50dc5702248babff181393bf68af963c85a7d631"} Jan 25 08:17:43 crc kubenswrapper[4832]: I0125 08:17:43.187829 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:17:43 crc kubenswrapper[4832]: I0125 08:17:43.211876 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8545448119999999 podStartE2EDuration="5.211852456s" podCreationTimestamp="2026-01-25 08:17:38 +0000 UTC" firstStartedPulling="2026-01-25 08:17:39.29431794 +0000 UTC m=+1241.968141473" lastFinishedPulling="2026-01-25 08:17:42.651625584 +0000 UTC m=+1245.325449117" observedRunningTime="2026-01-25 08:17:43.204425191 +0000 UTC m=+1245.878248724" watchObservedRunningTime="2026-01-25 08:17:43.211852456 +0000 UTC m=+1245.885675989" Jan 25 08:17:43 crc kubenswrapper[4832]: I0125 08:17:43.923610 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:45 
crc kubenswrapper[4832]: I0125 08:17:45.219915 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="ceilometer-central-agent" containerID="cri-o://54d8642a687fd5abb58434d5238cdd3c3d03b78179d5c13187a43fdc705ac23e" gracePeriod=30 Jan 25 08:17:45 crc kubenswrapper[4832]: I0125 08:17:45.220422 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="proxy-httpd" containerID="cri-o://67f35f533a6a9f3811e30cef50dc5702248babff181393bf68af963c85a7d631" gracePeriod=30 Jan 25 08:17:45 crc kubenswrapper[4832]: I0125 08:17:45.220515 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="ceilometer-notification-agent" containerID="cri-o://e10c6daa61c39ff66282ffcac1ab1e46b701f433b0de516e451c88e224723d1f" gracePeriod=30 Jan 25 08:17:45 crc kubenswrapper[4832]: I0125 08:17:45.220553 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="sg-core" containerID="cri-o://effc41c602053ee68d78fdae6cb1f4e3e9aec87a3b757adff28bc94f394694cd" gracePeriod=30 Jan 25 08:17:46 crc kubenswrapper[4832]: I0125 08:17:46.232159 4832 generic.go:334] "Generic (PLEG): container finished" podID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerID="67f35f533a6a9f3811e30cef50dc5702248babff181393bf68af963c85a7d631" exitCode=0 Jan 25 08:17:46 crc kubenswrapper[4832]: I0125 08:17:46.232500 4832 generic.go:334] "Generic (PLEG): container finished" podID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerID="effc41c602053ee68d78fdae6cb1f4e3e9aec87a3b757adff28bc94f394694cd" exitCode=2 Jan 25 08:17:46 crc kubenswrapper[4832]: I0125 08:17:46.232513 4832 generic.go:334] "Generic (PLEG): 
container finished" podID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerID="e10c6daa61c39ff66282ffcac1ab1e46b701f433b0de516e451c88e224723d1f" exitCode=0 Jan 25 08:17:46 crc kubenswrapper[4832]: I0125 08:17:46.232204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerDied","Data":"67f35f533a6a9f3811e30cef50dc5702248babff181393bf68af963c85a7d631"} Jan 25 08:17:46 crc kubenswrapper[4832]: I0125 08:17:46.232548 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerDied","Data":"effc41c602053ee68d78fdae6cb1f4e3e9aec87a3b757adff28bc94f394694cd"} Jan 25 08:17:46 crc kubenswrapper[4832]: I0125 08:17:46.232562 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerDied","Data":"e10c6daa61c39ff66282ffcac1ab1e46b701f433b0de516e451c88e224723d1f"} Jan 25 08:17:49 crc kubenswrapper[4832]: I0125 08:17:49.262559 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7snwr" event={"ID":"47eba52e-d8fa-4336-9c57-7006963eb712","Type":"ContainerStarted","Data":"76d01e0bfcc0872f53687478ef0953e42b8d701cf8269f78bc992fc53ee4a3b2"} Jan 25 08:17:49 crc kubenswrapper[4832]: I0125 08:17:49.284491 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7snwr" podStartSLOduration=1.8151787270000002 podStartE2EDuration="12.284469161s" podCreationTimestamp="2026-01-25 08:17:37 +0000 UTC" firstStartedPulling="2026-01-25 08:17:37.983114624 +0000 UTC m=+1240.656938157" lastFinishedPulling="2026-01-25 08:17:48.452405048 +0000 UTC m=+1251.126228591" observedRunningTime="2026-01-25 08:17:49.279197601 +0000 UTC m=+1251.953021124" watchObservedRunningTime="2026-01-25 08:17:49.284469161 +0000 UTC 
m=+1251.958292694" Jan 25 08:17:49 crc kubenswrapper[4832]: I0125 08:17:49.811969 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-856b6b4996-m59cl" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.277770 4832 generic.go:334] "Generic (PLEG): container finished" podID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerID="54d8642a687fd5abb58434d5238cdd3c3d03b78179d5c13187a43fdc705ac23e" exitCode=0 Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.278715 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerDied","Data":"54d8642a687fd5abb58434d5238cdd3c3d03b78179d5c13187a43fdc705ac23e"} Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.373245 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.506738 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-run-httpd\") pod \"8141145d-2a12-4069-9185-c8123c6a4c5a\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.506825 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-config-data\") pod \"8141145d-2a12-4069-9185-c8123c6a4c5a\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.506931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-combined-ca-bundle\") pod \"8141145d-2a12-4069-9185-c8123c6a4c5a\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.506971 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhcm\" (UniqueName: \"kubernetes.io/projected/8141145d-2a12-4069-9185-c8123c6a4c5a-kube-api-access-lmhcm\") pod \"8141145d-2a12-4069-9185-c8123c6a4c5a\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.507008 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-sg-core-conf-yaml\") pod \"8141145d-2a12-4069-9185-c8123c6a4c5a\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.507093 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-scripts\") pod \"8141145d-2a12-4069-9185-c8123c6a4c5a\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.507124 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-log-httpd\") pod \"8141145d-2a12-4069-9185-c8123c6a4c5a\" (UID: \"8141145d-2a12-4069-9185-c8123c6a4c5a\") " Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.507877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8141145d-2a12-4069-9185-c8123c6a4c5a" (UID: "8141145d-2a12-4069-9185-c8123c6a4c5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.508110 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8141145d-2a12-4069-9185-c8123c6a4c5a" (UID: "8141145d-2a12-4069-9185-c8123c6a4c5a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.530586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-scripts" (OuterVolumeSpecName: "scripts") pod "8141145d-2a12-4069-9185-c8123c6a4c5a" (UID: "8141145d-2a12-4069-9185-c8123c6a4c5a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.530654 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8141145d-2a12-4069-9185-c8123c6a4c5a-kube-api-access-lmhcm" (OuterVolumeSpecName: "kube-api-access-lmhcm") pod "8141145d-2a12-4069-9185-c8123c6a4c5a" (UID: "8141145d-2a12-4069-9185-c8123c6a4c5a"). InnerVolumeSpecName "kube-api-access-lmhcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.539418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8141145d-2a12-4069-9185-c8123c6a4c5a" (UID: "8141145d-2a12-4069-9185-c8123c6a4c5a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.585505 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8141145d-2a12-4069-9185-c8123c6a4c5a" (UID: "8141145d-2a12-4069-9185-c8123c6a4c5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.610141 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.610186 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.610207 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8141145d-2a12-4069-9185-c8123c6a4c5a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.610226 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.610246 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmhcm\" (UniqueName: \"kubernetes.io/projected/8141145d-2a12-4069-9185-c8123c6a4c5a-kube-api-access-lmhcm\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.610291 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.641590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-config-data" (OuterVolumeSpecName: "config-data") pod "8141145d-2a12-4069-9185-c8123c6a4c5a" (UID: "8141145d-2a12-4069-9185-c8123c6a4c5a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:50 crc kubenswrapper[4832]: I0125 08:17:50.711786 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8141145d-2a12-4069-9185-c8123c6a4c5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.293123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8141145d-2a12-4069-9185-c8123c6a4c5a","Type":"ContainerDied","Data":"a5ae5cda1bb23a86cc06760561e89a2699bd2f4815d59de6cf4fc70e58070011"} Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.293194 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.293582 4832 scope.go:117] "RemoveContainer" containerID="67f35f533a6a9f3811e30cef50dc5702248babff181393bf68af963c85a7d631" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.333447 4832 scope.go:117] "RemoveContainer" containerID="effc41c602053ee68d78fdae6cb1f4e3e9aec87a3b757adff28bc94f394694cd" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.333525 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.343295 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.377200 4832 scope.go:117] "RemoveContainer" containerID="e10c6daa61c39ff66282ffcac1ab1e46b701f433b0de516e451c88e224723d1f" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382102 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:51 crc kubenswrapper[4832]: E0125 08:17:51.382600 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" 
containerName="ceilometer-central-agent" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382626 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="ceilometer-central-agent" Jan 25 08:17:51 crc kubenswrapper[4832]: E0125 08:17:51.382638 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="proxy-httpd" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382647 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="proxy-httpd" Jan 25 08:17:51 crc kubenswrapper[4832]: E0125 08:17:51.382685 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="sg-core" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382695 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="sg-core" Jan 25 08:17:51 crc kubenswrapper[4832]: E0125 08:17:51.382724 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="ceilometer-notification-agent" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382732 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="ceilometer-notification-agent" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382910 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="sg-core" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382934 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="ceilometer-notification-agent" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382945 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="proxy-httpd" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.382957 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" containerName="ceilometer-central-agent" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.384578 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.387640 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.387852 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.393634 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.401944 4832 scope.go:117] "RemoveContainer" containerID="54d8642a687fd5abb58434d5238cdd3c3d03b78179d5c13187a43fdc705ac23e" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.528106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-log-httpd\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.528190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.528215 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.528286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxt8g\" (UniqueName: \"kubernetes.io/projected/9286b541-140f-4479-b885-6c5e01384354-kube-api-access-zxt8g\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.528354 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-config-data\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.528375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-run-httpd\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.528433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-scripts\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.629705 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxt8g\" (UniqueName: \"kubernetes.io/projected/9286b541-140f-4479-b885-6c5e01384354-kube-api-access-zxt8g\") pod 
\"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.629788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-config-data\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.629806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-run-httpd\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.629837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-scripts\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.630337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-run-httpd\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.630478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-log-httpd\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.630546 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-log-httpd\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.630608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.630651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.635301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.636507 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-scripts\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.636895 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.637902 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-config-data\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.653498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxt8g\" (UniqueName: \"kubernetes.io/projected/9286b541-140f-4479-b885-6c5e01384354-kube-api-access-zxt8g\") pod \"ceilometer-0\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " pod="openstack/ceilometer-0" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.682717 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8141145d-2a12-4069-9185-c8123c6a4c5a" path="/var/lib/kubelet/pods/8141145d-2a12-4069-9185-c8123c6a4c5a/volumes" Jan 25 08:17:51 crc kubenswrapper[4832]: I0125 08:17:51.728655 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:52 crc kubenswrapper[4832]: I0125 08:17:52.149770 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:17:52 crc kubenswrapper[4832]: I0125 08:17:52.150253 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:17:52 crc kubenswrapper[4832]: I0125 08:17:52.189985 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:52 crc kubenswrapper[4832]: I0125 08:17:52.302960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerStarted","Data":"fa5306bf348763a0674ba0f3b28d6f757b93b5c20ab08607614f840aa0f83262"} Jan 25 08:17:53 crc kubenswrapper[4832]: I0125 08:17:53.157589 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:17:53 crc kubenswrapper[4832]: I0125 08:17:53.314308 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerStarted","Data":"08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f"} Jan 25 08:17:54 crc kubenswrapper[4832]: I0125 08:17:54.325713 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerStarted","Data":"4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90"} Jan 
25 08:17:55 crc kubenswrapper[4832]: I0125 08:17:55.338840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerStarted","Data":"13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a"} Jan 25 08:17:56 crc kubenswrapper[4832]: I0125 08:17:56.349142 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerStarted","Data":"cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b"} Jan 25 08:17:56 crc kubenswrapper[4832]: I0125 08:17:56.349916 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:17:56 crc kubenswrapper[4832]: I0125 08:17:56.349555 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="proxy-httpd" containerID="cri-o://cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b" gracePeriod=30 Jan 25 08:17:56 crc kubenswrapper[4832]: I0125 08:17:56.349281 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-central-agent" containerID="cri-o://08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f" gracePeriod=30 Jan 25 08:17:56 crc kubenswrapper[4832]: I0125 08:17:56.349582 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-notification-agent" containerID="cri-o://4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90" gracePeriod=30 Jan 25 08:17:56 crc kubenswrapper[4832]: I0125 08:17:56.349570 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="sg-core" containerID="cri-o://13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a" gracePeriod=30 Jan 25 08:17:56 crc kubenswrapper[4832]: I0125 08:17:56.379759 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.186422797 podStartE2EDuration="5.379740443s" podCreationTimestamp="2026-01-25 08:17:51 +0000 UTC" firstStartedPulling="2026-01-25 08:17:52.206771874 +0000 UTC m=+1254.880595407" lastFinishedPulling="2026-01-25 08:17:55.40008952 +0000 UTC m=+1258.073913053" observedRunningTime="2026-01-25 08:17:56.379485665 +0000 UTC m=+1259.053309198" watchObservedRunningTime="2026-01-25 08:17:56.379740443 +0000 UTC m=+1259.053563976" Jan 25 08:17:57 crc kubenswrapper[4832]: I0125 08:17:57.359789 4832 generic.go:334] "Generic (PLEG): container finished" podID="9286b541-140f-4479-b885-6c5e01384354" containerID="cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b" exitCode=0 Jan 25 08:17:57 crc kubenswrapper[4832]: I0125 08:17:57.360090 4832 generic.go:334] "Generic (PLEG): container finished" podID="9286b541-140f-4479-b885-6c5e01384354" containerID="13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a" exitCode=2 Jan 25 08:17:57 crc kubenswrapper[4832]: I0125 08:17:57.360102 4832 generic.go:334] "Generic (PLEG): container finished" podID="9286b541-140f-4479-b885-6c5e01384354" containerID="4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90" exitCode=0 Jan 25 08:17:57 crc kubenswrapper[4832]: I0125 08:17:57.359963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerDied","Data":"cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b"} Jan 25 08:17:57 crc kubenswrapper[4832]: I0125 08:17:57.360140 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerDied","Data":"13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a"} Jan 25 08:17:57 crc kubenswrapper[4832]: I0125 08:17:57.360154 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerDied","Data":"4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90"} Jan 25 08:17:58 crc kubenswrapper[4832]: I0125 08:17:58.368931 4832 generic.go:334] "Generic (PLEG): container finished" podID="47eba52e-d8fa-4336-9c57-7006963eb712" containerID="76d01e0bfcc0872f53687478ef0953e42b8d701cf8269f78bc992fc53ee4a3b2" exitCode=0 Jan 25 08:17:58 crc kubenswrapper[4832]: I0125 08:17:58.368973 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7snwr" event={"ID":"47eba52e-d8fa-4336-9c57-7006963eb712","Type":"ContainerDied","Data":"76d01e0bfcc0872f53687478ef0953e42b8d701cf8269f78bc992fc53ee4a3b2"} Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.749470 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.811251 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-856b6b4996-m59cl" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.811379 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.885530 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmngs\" (UniqueName: \"kubernetes.io/projected/47eba52e-d8fa-4336-9c57-7006963eb712-kube-api-access-bmngs\") pod \"47eba52e-d8fa-4336-9c57-7006963eb712\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.885648 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-combined-ca-bundle\") pod \"47eba52e-d8fa-4336-9c57-7006963eb712\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.885739 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-scripts\") pod \"47eba52e-d8fa-4336-9c57-7006963eb712\" (UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.885911 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-config-data\") pod \"47eba52e-d8fa-4336-9c57-7006963eb712\" 
(UID: \"47eba52e-d8fa-4336-9c57-7006963eb712\") " Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.891770 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-scripts" (OuterVolumeSpecName: "scripts") pod "47eba52e-d8fa-4336-9c57-7006963eb712" (UID: "47eba52e-d8fa-4336-9c57-7006963eb712"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.891787 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47eba52e-d8fa-4336-9c57-7006963eb712-kube-api-access-bmngs" (OuterVolumeSpecName: "kube-api-access-bmngs") pod "47eba52e-d8fa-4336-9c57-7006963eb712" (UID: "47eba52e-d8fa-4336-9c57-7006963eb712"). InnerVolumeSpecName "kube-api-access-bmngs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.913555 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47eba52e-d8fa-4336-9c57-7006963eb712" (UID: "47eba52e-d8fa-4336-9c57-7006963eb712"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.923890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-config-data" (OuterVolumeSpecName: "config-data") pod "47eba52e-d8fa-4336-9c57-7006963eb712" (UID: "47eba52e-d8fa-4336-9c57-7006963eb712"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.975443 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.988079 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.988113 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmngs\" (UniqueName: \"kubernetes.io/projected/47eba52e-d8fa-4336-9c57-7006963eb712-kube-api-access-bmngs\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.988124 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:17:59 crc kubenswrapper[4832]: I0125 08:17:59.988134 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47eba52e-d8fa-4336-9c57-7006963eb712-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.089481 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-log-httpd\") pod \"9286b541-140f-4479-b885-6c5e01384354\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.089553 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-run-httpd\") pod \"9286b541-140f-4479-b885-6c5e01384354\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.089602 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-combined-ca-bundle\") pod \"9286b541-140f-4479-b885-6c5e01384354\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.089625 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-config-data\") pod \"9286b541-140f-4479-b885-6c5e01384354\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.089697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-sg-core-conf-yaml\") pod \"9286b541-140f-4479-b885-6c5e01384354\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.089719 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-scripts\") pod \"9286b541-140f-4479-b885-6c5e01384354\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.089793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxt8g\" (UniqueName: \"kubernetes.io/projected/9286b541-140f-4479-b885-6c5e01384354-kube-api-access-zxt8g\") pod \"9286b541-140f-4479-b885-6c5e01384354\" (UID: \"9286b541-140f-4479-b885-6c5e01384354\") " Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.090014 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9286b541-140f-4479-b885-6c5e01384354" (UID: "9286b541-140f-4479-b885-6c5e01384354"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.090309 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.090476 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9286b541-140f-4479-b885-6c5e01384354" (UID: "9286b541-140f-4479-b885-6c5e01384354"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.095649 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9286b541-140f-4479-b885-6c5e01384354-kube-api-access-zxt8g" (OuterVolumeSpecName: "kube-api-access-zxt8g") pod "9286b541-140f-4479-b885-6c5e01384354" (UID: "9286b541-140f-4479-b885-6c5e01384354"). InnerVolumeSpecName "kube-api-access-zxt8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.095594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-scripts" (OuterVolumeSpecName: "scripts") pod "9286b541-140f-4479-b885-6c5e01384354" (UID: "9286b541-140f-4479-b885-6c5e01384354"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.115361 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9286b541-140f-4479-b885-6c5e01384354" (UID: "9286b541-140f-4479-b885-6c5e01384354"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.164210 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9286b541-140f-4479-b885-6c5e01384354" (UID: "9286b541-140f-4479-b885-6c5e01384354"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.184148 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-config-data" (OuterVolumeSpecName: "config-data") pod "9286b541-140f-4479-b885-6c5e01384354" (UID: "9286b541-140f-4479-b885-6c5e01384354"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.192626 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxt8g\" (UniqueName: \"kubernetes.io/projected/9286b541-140f-4479-b885-6c5e01384354-kube-api-access-zxt8g\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.192652 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9286b541-140f-4479-b885-6c5e01384354-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.192664 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.192673 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.192683 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.192691 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9286b541-140f-4479-b885-6c5e01384354-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.388098 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7snwr" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.388089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7snwr" event={"ID":"47eba52e-d8fa-4336-9c57-7006963eb712","Type":"ContainerDied","Data":"ffebb57d29fdc0accddaccb2e15f57ff8159296f20ced3ec8fcb355ff52b4534"} Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.388257 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffebb57d29fdc0accddaccb2e15f57ff8159296f20ced3ec8fcb355ff52b4534" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.390592 4832 generic.go:334] "Generic (PLEG): container finished" podID="9286b541-140f-4479-b885-6c5e01384354" containerID="08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f" exitCode=0 Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.390629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerDied","Data":"08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f"} Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.390653 4832 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.390668 4832 scope.go:117] "RemoveContainer" containerID="cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.390655 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9286b541-140f-4479-b885-6c5e01384354","Type":"ContainerDied","Data":"fa5306bf348763a0674ba0f3b28d6f757b93b5c20ab08607614f840aa0f83262"} Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.424397 4832 scope.go:117] "RemoveContainer" containerID="13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.436946 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.450841 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.462652 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.463197 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-notification-agent" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463224 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-notification-agent" Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.463249 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="sg-core" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463259 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="sg-core" Jan 25 
08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.463276 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="proxy-httpd" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463284 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="proxy-httpd" Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.463304 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-central-agent" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463313 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-central-agent" Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.463331 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47eba52e-d8fa-4336-9c57-7006963eb712" containerName="nova-cell0-conductor-db-sync" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463339 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="47eba52e-d8fa-4336-9c57-7006963eb712" containerName="nova-cell0-conductor-db-sync" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463624 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-notification-agent" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463649 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="proxy-httpd" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463666 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="ceilometer-central-agent" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463678 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9286b541-140f-4479-b885-6c5e01384354" containerName="sg-core" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.463694 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="47eba52e-d8fa-4336-9c57-7006963eb712" containerName="nova-cell0-conductor-db-sync" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.467341 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.470090 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.470359 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.474771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.527573 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.529075 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.534765 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rf7hq" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.534947 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.535974 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.601582 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-config-data\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.601631 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-log-httpd\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.601698 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.601716 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-scripts\") pod \"ceilometer-0\" (UID: 
\"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.601731 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.601748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.601935 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-run-httpd\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.602020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgr69\" (UniqueName: \"kubernetes.io/projected/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-kube-api-access-zgr69\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.602083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7nj7\" (UniqueName: \"kubernetes.io/projected/a51d9c21-2b71-46f0-8b63-9961d75247fe-kube-api-access-n7nj7\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " 
pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.602129 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.689472 4832 scope.go:117] "RemoveContainer" containerID="4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-run-httpd\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706564 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgr69\" (UniqueName: \"kubernetes.io/projected/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-kube-api-access-zgr69\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706610 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7nj7\" (UniqueName: \"kubernetes.io/projected/a51d9c21-2b71-46f0-8b63-9961d75247fe-kube-api-access-n7nj7\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-combined-ca-bundle\") pod 
\"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706656 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-config-data\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706678 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-log-httpd\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706706 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-scripts\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706739 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.706754 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.716876 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-run-httpd\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.718192 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-log-httpd\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.719017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.722150 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-scripts\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.726458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc 
kubenswrapper[4832]: I0125 08:18:00.726793 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-config-data\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.734616 4832 scope.go:117] "RemoveContainer" containerID="08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.735053 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.736377 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.738246 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgr69\" (UniqueName: \"kubernetes.io/projected/b0b4eea3-2f29-4f50-a197-b3e6531df0d5-kube-api-access-zgr69\") pod \"nova-cell0-conductor-0\" (UID: \"b0b4eea3-2f29-4f50-a197-b3e6531df0d5\") " pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.754799 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7nj7\" (UniqueName: \"kubernetes.io/projected/a51d9c21-2b71-46f0-8b63-9961d75247fe-kube-api-access-n7nj7\") pod \"ceilometer-0\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") " pod="openstack/ceilometer-0" Jan 25 08:18:00 crc 
kubenswrapper[4832]: I0125 08:18:00.789201 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.847750 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.898847 4832 scope.go:117] "RemoveContainer" containerID="cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b" Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.899435 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b\": container with ID starting with cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b not found: ID does not exist" containerID="cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.899479 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b"} err="failed to get container status \"cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b\": rpc error: code = NotFound desc = could not find container \"cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b\": container with ID starting with cce3aabf7b1aab5dd066f780753e0fcd2a93227e5389f5ef3001dd7d3e2d904b not found: ID does not exist" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.899518 4832 scope.go:117] "RemoveContainer" containerID="13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a" Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.899886 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a\": container with ID starting with 13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a not found: ID does not exist" containerID="13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.899926 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a"} err="failed to get container status \"13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a\": rpc error: code = NotFound desc = could not find container \"13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a\": container with ID starting with 13bae70daef2993d97195a4781d978baeff7c838d8c50c0ebc3eea467c5ad10a not found: ID does not exist" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.899948 4832 scope.go:117] "RemoveContainer" containerID="4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90" Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.900221 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90\": container with ID starting with 4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90 not found: ID does not exist" containerID="4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.900272 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90"} err="failed to get container status \"4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90\": rpc error: code = NotFound desc = could not find container \"4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90\": container with ID 
starting with 4d5e2fc296072b935c0ecaef9cd310181a723549626b9ffc6f2de7b43b147b90 not found: ID does not exist" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.900311 4832 scope.go:117] "RemoveContainer" containerID="08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f" Jan 25 08:18:00 crc kubenswrapper[4832]: E0125 08:18:00.900620 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f\": container with ID starting with 08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f not found: ID does not exist" containerID="08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f" Jan 25 08:18:00 crc kubenswrapper[4832]: I0125 08:18:00.900645 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f"} err="failed to get container status \"08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f\": rpc error: code = NotFound desc = could not find container \"08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f\": container with ID starting with 08d0bbe427beece5c78442e6e1a0432d39f2cad866f0a7b9180e0aab3d98392f not found: ID does not exist" Jan 25 08:18:01 crc kubenswrapper[4832]: I0125 08:18:01.622565 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 25 08:18:01 crc kubenswrapper[4832]: I0125 08:18:01.635372 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:01 crc kubenswrapper[4832]: I0125 08:18:01.686465 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9286b541-140f-4479-b885-6c5e01384354" path="/var/lib/kubelet/pods/9286b541-140f-4479-b885-6c5e01384354/volumes" Jan 25 08:18:02 crc kubenswrapper[4832]: I0125 08:18:02.410047 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b0b4eea3-2f29-4f50-a197-b3e6531df0d5","Type":"ContainerStarted","Data":"e33067e3c8c37767a81cdb5ac19d1e4a94929f060d91a17545565b82c7015796"} Jan 25 08:18:02 crc kubenswrapper[4832]: I0125 08:18:02.429004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b0b4eea3-2f29-4f50-a197-b3e6531df0d5","Type":"ContainerStarted","Data":"f82c05bfb7f38eb78c73c1f4eacb2e6feb8b5cc6459ebb2f3de19c000668b30b"} Jan 25 08:18:02 crc kubenswrapper[4832]: I0125 08:18:02.430519 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:02 crc kubenswrapper[4832]: I0125 08:18:02.446343 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerStarted","Data":"bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16"} Jan 25 08:18:02 crc kubenswrapper[4832]: I0125 08:18:02.446445 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerStarted","Data":"cb8b35be57621d2200a2533e36665fb8f3c966b024204287d6fa4f5f0430a94f"} Jan 25 08:18:02 crc kubenswrapper[4832]: I0125 08:18:02.466814 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.466794775 podStartE2EDuration="2.466794775s" podCreationTimestamp="2026-01-25 08:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:02.459159273 +0000 UTC m=+1265.132982806" watchObservedRunningTime="2026-01-25 08:18:02.466794775 +0000 UTC m=+1265.140618308" Jan 25 08:18:03 crc kubenswrapper[4832]: I0125 08:18:03.463617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerStarted","Data":"d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75"} Jan 25 08:18:04 crc kubenswrapper[4832]: I0125 08:18:04.474369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerStarted","Data":"9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2"} Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.083627 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.203504 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-scripts\") pod \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.203581 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpzvt\" (UniqueName: \"kubernetes.io/projected/573d9b12-352d-4b14-b79c-f2a4a3bfec61-kube-api-access-mpzvt\") pod \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.203746 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573d9b12-352d-4b14-b79c-f2a4a3bfec61-logs\") pod \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.203864 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-config-data\") pod 
\"573d9b12-352d-4b14-b79c-f2a4a3bfec61\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.203911 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-tls-certs\") pod \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.203949 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-combined-ca-bundle\") pod \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.203992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-secret-key\") pod \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\" (UID: \"573d9b12-352d-4b14-b79c-f2a4a3bfec61\") " Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.204380 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573d9b12-352d-4b14-b79c-f2a4a3bfec61-logs" (OuterVolumeSpecName: "logs") pod "573d9b12-352d-4b14-b79c-f2a4a3bfec61" (UID: "573d9b12-352d-4b14-b79c-f2a4a3bfec61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.208265 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "573d9b12-352d-4b14-b79c-f2a4a3bfec61" (UID: "573d9b12-352d-4b14-b79c-f2a4a3bfec61"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.210937 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573d9b12-352d-4b14-b79c-f2a4a3bfec61-kube-api-access-mpzvt" (OuterVolumeSpecName: "kube-api-access-mpzvt") pod "573d9b12-352d-4b14-b79c-f2a4a3bfec61" (UID: "573d9b12-352d-4b14-b79c-f2a4a3bfec61"). InnerVolumeSpecName "kube-api-access-mpzvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.234127 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-scripts" (OuterVolumeSpecName: "scripts") pod "573d9b12-352d-4b14-b79c-f2a4a3bfec61" (UID: "573d9b12-352d-4b14-b79c-f2a4a3bfec61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.234768 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-config-data" (OuterVolumeSpecName: "config-data") pod "573d9b12-352d-4b14-b79c-f2a4a3bfec61" (UID: "573d9b12-352d-4b14-b79c-f2a4a3bfec61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.240214 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "573d9b12-352d-4b14-b79c-f2a4a3bfec61" (UID: "573d9b12-352d-4b14-b79c-f2a4a3bfec61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.260838 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "573d9b12-352d-4b14-b79c-f2a4a3bfec61" (UID: "573d9b12-352d-4b14-b79c-f2a4a3bfec61"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.305868 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpzvt\" (UniqueName: \"kubernetes.io/projected/573d9b12-352d-4b14-b79c-f2a4a3bfec61-kube-api-access-mpzvt\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.305899 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573d9b12-352d-4b14-b79c-f2a4a3bfec61-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.305909 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.305921 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.305929 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.305937 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/573d9b12-352d-4b14-b79c-f2a4a3bfec61-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.305945 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/573d9b12-352d-4b14-b79c-f2a4a3bfec61-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.485732 4832 generic.go:334] "Generic (PLEG): container finished" podID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerID="c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76" exitCode=137 Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.485795 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856b6b4996-m59cl" event={"ID":"573d9b12-352d-4b14-b79c-f2a4a3bfec61","Type":"ContainerDied","Data":"c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76"} Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.485822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856b6b4996-m59cl" event={"ID":"573d9b12-352d-4b14-b79c-f2a4a3bfec61","Type":"ContainerDied","Data":"1742dd5219b7af04a6e13c07f9379331dad7bce12fb59c3d9128bb68d2f8e984"} Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.485840 4832 scope.go:117] "RemoveContainer" containerID="bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.486007 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-856b6b4996-m59cl" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.492325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerStarted","Data":"427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6"} Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.493611 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.553364 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4678993670000002 podStartE2EDuration="5.553341281s" podCreationTimestamp="2026-01-25 08:18:00 +0000 UTC" firstStartedPulling="2026-01-25 08:18:01.635078532 +0000 UTC m=+1264.308902065" lastFinishedPulling="2026-01-25 08:18:04.720520446 +0000 UTC m=+1267.394343979" observedRunningTime="2026-01-25 08:18:05.528470477 +0000 UTC m=+1268.202294030" watchObservedRunningTime="2026-01-25 08:18:05.553341281 +0000 UTC m=+1268.227164814" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.571648 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-856b6b4996-m59cl"] Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.585170 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-856b6b4996-m59cl"] Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.665194 4832 scope.go:117] "RemoveContainer" containerID="c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.683081 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" path="/var/lib/kubelet/pods/573d9b12-352d-4b14-b79c-f2a4a3bfec61/volumes" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.684064 4832 scope.go:117] "RemoveContainer" 
containerID="bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964" Jan 25 08:18:05 crc kubenswrapper[4832]: E0125 08:18:05.684678 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964\": container with ID starting with bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964 not found: ID does not exist" containerID="bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.684741 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964"} err="failed to get container status \"bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964\": rpc error: code = NotFound desc = could not find container \"bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964\": container with ID starting with bb732af1be5b8febd9fa4b66ceda9d6420275da7a02af0dbc3f119bbf4968964 not found: ID does not exist" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.684766 4832 scope.go:117] "RemoveContainer" containerID="c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76" Jan 25 08:18:05 crc kubenswrapper[4832]: E0125 08:18:05.685017 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76\": container with ID starting with c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76 not found: ID does not exist" containerID="c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76" Jan 25 08:18:05 crc kubenswrapper[4832]: I0125 08:18:05.685047 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76"} err="failed to get container status \"c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76\": rpc error: code = NotFound desc = could not find container \"c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76\": container with ID starting with c292b116a3c1fdcc1ff68e24bd47cbed28c4a98bf62546d1e65268a40c49af76 not found: ID does not exist" Jan 25 08:18:10 crc kubenswrapper[4832]: I0125 08:18:10.873620 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.322827 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nglwx"] Jan 25 08:18:11 crc kubenswrapper[4832]: E0125 08:18:11.323672 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon-log" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.323695 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon-log" Jan 25 08:18:11 crc kubenswrapper[4832]: E0125 08:18:11.323724 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.323733 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.323913 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.323943 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="573d9b12-352d-4b14-b79c-f2a4a3bfec61" containerName="horizon-log" Jan 25 08:18:11 crc 
kubenswrapper[4832]: I0125 08:18:11.324660 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.327240 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.327197 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.347504 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nglwx"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.435446 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.435627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fx9\" (UniqueName: \"kubernetes.io/projected/d1a99b4f-2213-4a2a-9086-e755207a4e3c-kube-api-access-k6fx9\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.435689 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-config-data\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.435748 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-scripts\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.537139 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fx9\" (UniqueName: \"kubernetes.io/projected/d1a99b4f-2213-4a2a-9086-e755207a4e3c-kube-api-access-k6fx9\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.537211 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-config-data\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.537254 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-scripts\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.537275 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.562458 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.563211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.564456 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.567909 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.571220 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-scripts\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.571492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-config-data\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.573308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fx9\" (UniqueName: \"kubernetes.io/projected/d1a99b4f-2213-4a2a-9086-e755207a4e3c-kube-api-access-k6fx9\") pod \"nova-cell0-cell-mapping-nglwx\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") " pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.607465 4832 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.653549 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.655097 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.672846 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.707452 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nglwx" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.715582 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4zl9\" (UniqueName: \"kubernetes.io/projected/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-kube-api-access-j4zl9\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc449e14-2c38-4376-8bae-1950edee8d5a-logs\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 
08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740238 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-logs\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740262 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-config-data\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkrw\" (UniqueName: \"kubernetes.io/projected/dc449e14-2c38-4376-8bae-1950edee8d5a-kube-api-access-ltkrw\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740333 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.740365 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-config-data\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.754973 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.757686 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.764234 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.765554 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.777135 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gbk4s"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.778873 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.818133 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gbk4s"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc449e14-2c38-4376-8bae-1950edee8d5a-logs\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-logs\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-config-data\") pod 
\"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841693 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltkrw\" (UniqueName: \"kubernetes.io/projected/dc449e14-2c38-4376-8bae-1950edee8d5a-kube-api-access-ltkrw\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841794 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-config-data\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 
crc kubenswrapper[4832]: I0125 08:18:11.841850 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqf4s\" (UniqueName: \"kubernetes.io/projected/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-kube-api-access-fqf4s\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.841942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4zl9\" (UniqueName: \"kubernetes.io/projected/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-kube-api-access-j4zl9\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.842610 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc449e14-2c38-4376-8bae-1950edee8d5a-logs\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.842864 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-logs\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.850833 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.852577 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.862802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4zl9\" (UniqueName: \"kubernetes.io/projected/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-kube-api-access-j4zl9\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.864890 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.865471 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-config-data\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.868184 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.869760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-config-data\") pod \"nova-api-0\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.878069 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " pod="openstack/nova-api-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.879050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltkrw\" (UniqueName: \"kubernetes.io/projected/dc449e14-2c38-4376-8bae-1950edee8d5a-kube-api-access-ltkrw\") pod \"nova-metadata-0\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " pod="openstack/nova-metadata-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.881075 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944232 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmc2\" (UniqueName: \"kubernetes.io/projected/b4fac470-1791-4461-9a15-d3ce171d8f15-kube-api-access-zdmc2\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-config\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944407 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-config-data\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944455 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8wc\" (UniqueName: \"kubernetes.io/projected/d848c5d5-d11c-4e63-b958-f98b1930587f-kube-api-access-sb8wc\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.944501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqf4s\" (UniqueName: \"kubernetes.io/projected/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-kube-api-access-fqf4s\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.949511 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.951798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:11 crc kubenswrapper[4832]: I0125 08:18:11.962416 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqf4s\" (UniqueName: \"kubernetes.io/projected/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-kube-api-access-fqf4s\") pod \"nova-cell1-novncproxy-0\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.002761 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.031560 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046287 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-config\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046346 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-config-data\") pod \"nova-scheduler-0\" (UID: 
\"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046365 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046493 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046532 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8wc\" (UniqueName: \"kubernetes.io/projected/d848c5d5-d11c-4e63-b958-f98b1930587f-kube-api-access-sb8wc\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046605 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 
08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.046626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmc2\" (UniqueName: \"kubernetes.io/projected/b4fac470-1791-4461-9a15-d3ce171d8f15-kube-api-access-zdmc2\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.048063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.048609 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-config\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.049035 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.049318 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.050260 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.053302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.054012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-config-data\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.068401 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmc2\" (UniqueName: \"kubernetes.io/projected/b4fac470-1791-4461-9a15-d3ce171d8f15-kube-api-access-zdmc2\") pod \"dnsmasq-dns-845d6d6f59-gbk4s\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.076141 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8wc\" (UniqueName: \"kubernetes.io/projected/d848c5d5-d11c-4e63-b958-f98b1930587f-kube-api-access-sb8wc\") pod \"nova-scheduler-0\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.235842 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.251230 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.258866 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.316046 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nglwx"] Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.480453 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c24ss"] Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.482134 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.486795 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.487027 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.493378 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c24ss"] Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.555205 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.558781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgsl\" (UniqueName: \"kubernetes.io/projected/30535fb7-5d1d-47e6-8394-3df7f9d032eb-kube-api-access-cwgsl\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " 
pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.558840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-scripts\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.558891 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.558927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-config-data\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.587857 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nglwx" event={"ID":"d1a99b4f-2213-4a2a-9086-e755207a4e3c","Type":"ContainerStarted","Data":"c54b5bccd303b53ad0e3d2acd8a9fc651c99940d026f7fb6a375531e3792d6d2"} Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.589855 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b","Type":"ContainerStarted","Data":"8bc9e4043efba7378cb4eef0e94b5f484e43b6112be5f6201c78e825b476acf0"} Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.661199 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cwgsl\" (UniqueName: \"kubernetes.io/projected/30535fb7-5d1d-47e6-8394-3df7f9d032eb-kube-api-access-cwgsl\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.661257 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-scripts\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.661315 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.661350 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-config-data\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.666168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-config-data\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.666581 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-scripts\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.669761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.671852 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.679376 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwgsl\" (UniqueName: \"kubernetes.io/projected/30535fb7-5d1d-47e6-8394-3df7f9d032eb-kube-api-access-cwgsl\") pod \"nova-cell1-conductor-db-sync-c24ss\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.846594 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.944903 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:12 crc kubenswrapper[4832]: I0125 08:18:12.963439 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gbk4s"] Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.082896 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:13 crc kubenswrapper[4832]: W0125 08:18:13.089895 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd848c5d5_d11c_4e63_b958_f98b1930587f.slice/crio-f64a9c73159e75b8b904c44e467e7695ec4362f813d307cecefe052d5e83bb85 WatchSource:0}: Error finding container f64a9c73159e75b8b904c44e467e7695ec4362f813d307cecefe052d5e83bb85: Status 404 returned error can't find the container with id f64a9c73159e75b8b904c44e467e7695ec4362f813d307cecefe052d5e83bb85 Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.373811 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c24ss"] Jan 25 08:18:13 crc kubenswrapper[4832]: W0125 08:18:13.401694 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30535fb7_5d1d_47e6_8394_3df7f9d032eb.slice/crio-123125c156df852f484eb757b1483ddd625943babc609f2b3387378699ad658c WatchSource:0}: Error finding container 123125c156df852f484eb757b1483ddd625943babc609f2b3387378699ad658c: Status 404 returned error can't find the container with id 123125c156df852f484eb757b1483ddd625943babc609f2b3387378699ad658c Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.603794 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"dc449e14-2c38-4376-8bae-1950edee8d5a","Type":"ContainerStarted","Data":"69660a7ae90a3e513737716008b2e0a9f84e8cc9175023164495364c47f8710a"} Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.612097 4832 generic.go:334] "Generic (PLEG): container finished" podID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerID="ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce" exitCode=0 Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.612206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" event={"ID":"b4fac470-1791-4461-9a15-d3ce171d8f15","Type":"ContainerDied","Data":"ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce"} Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.612243 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" event={"ID":"b4fac470-1791-4461-9a15-d3ce171d8f15","Type":"ContainerStarted","Data":"78dde9b3b81ac468ad5541e6b4561506f93f4ea181ec74d91bdfe868317cfa89"} Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.623529 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c24ss" event={"ID":"30535fb7-5d1d-47e6-8394-3df7f9d032eb","Type":"ContainerStarted","Data":"123125c156df852f484eb757b1483ddd625943babc609f2b3387378699ad658c"} Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.635961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nglwx" event={"ID":"d1a99b4f-2213-4a2a-9086-e755207a4e3c","Type":"ContainerStarted","Data":"574faa8798ceac6b8e063d9c738b9da32df65a6d57fde1ba725961285d3d8d0e"} Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.643894 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5bbea8c8-972b-41f2-b1e7-e2aa7f521384","Type":"ContainerStarted","Data":"1c923056f904629c76f592bad52d3ec1f1d7d4d8be0159e1ee7ee63afdd7b2f2"} Jan 25 08:18:13 
crc kubenswrapper[4832]: I0125 08:18:13.645675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d848c5d5-d11c-4e63-b958-f98b1930587f","Type":"ContainerStarted","Data":"f64a9c73159e75b8b904c44e467e7695ec4362f813d307cecefe052d5e83bb85"} Jan 25 08:18:13 crc kubenswrapper[4832]: I0125 08:18:13.671715 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nglwx" podStartSLOduration=2.671692083 podStartE2EDuration="2.671692083s" podCreationTimestamp="2026-01-25 08:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:13.65480807 +0000 UTC m=+1276.328631603" watchObservedRunningTime="2026-01-25 08:18:13.671692083 +0000 UTC m=+1276.345515606" Jan 25 08:18:14 crc kubenswrapper[4832]: I0125 08:18:14.658773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c24ss" event={"ID":"30535fb7-5d1d-47e6-8394-3df7f9d032eb","Type":"ContainerStarted","Data":"72124bd7bf49d598aa55b3e27272ea9046d23af883d96705c9dd9a7fe614d8f3"} Jan 25 08:18:14 crc kubenswrapper[4832]: I0125 08:18:14.679498 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-c24ss" podStartSLOduration=2.679479889 podStartE2EDuration="2.679479889s" podCreationTimestamp="2026-01-25 08:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:14.673471617 +0000 UTC m=+1277.347295150" watchObservedRunningTime="2026-01-25 08:18:14.679479889 +0000 UTC m=+1277.353303422" Jan 25 08:18:15 crc kubenswrapper[4832]: I0125 08:18:15.358861 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:15 crc kubenswrapper[4832]: I0125 08:18:15.397206 4832 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.681025 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" event={"ID":"b4fac470-1791-4461-9a15-d3ce171d8f15","Type":"ContainerStarted","Data":"0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f"} Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.681534 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.684355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc449e14-2c38-4376-8bae-1950edee8d5a","Type":"ContainerStarted","Data":"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5"} Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.686923 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5bbea8c8-972b-41f2-b1e7-e2aa7f521384","Type":"ContainerStarted","Data":"6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e"} Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.687045 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5bbea8c8-972b-41f2-b1e7-e2aa7f521384" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e" gracePeriod=30 Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.695181 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b","Type":"ContainerStarted","Data":"e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae"} Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.697063 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"d848c5d5-d11c-4e63-b958-f98b1930587f","Type":"ContainerStarted","Data":"49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181"} Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.741985 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" podStartSLOduration=5.741960616 podStartE2EDuration="5.741960616s" podCreationTimestamp="2026-01-25 08:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:16.717330948 +0000 UTC m=+1279.391154481" watchObservedRunningTime="2026-01-25 08:18:16.741960616 +0000 UTC m=+1279.415784149" Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.747915 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.797048238 podStartE2EDuration="5.747894356s" podCreationTimestamp="2026-01-25 08:18:11 +0000 UTC" firstStartedPulling="2026-01-25 08:18:12.96287797 +0000 UTC m=+1275.636701493" lastFinishedPulling="2026-01-25 08:18:15.913724078 +0000 UTC m=+1278.587547611" observedRunningTime="2026-01-25 08:18:16.742893374 +0000 UTC m=+1279.416716917" watchObservedRunningTime="2026-01-25 08:18:16.747894356 +0000 UTC m=+1279.421717889" Jan 25 08:18:16 crc kubenswrapper[4832]: I0125 08:18:16.777936 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.96318921 podStartE2EDuration="5.777915107s" podCreationTimestamp="2026-01-25 08:18:11 +0000 UTC" firstStartedPulling="2026-01-25 08:18:13.096793355 +0000 UTC m=+1275.770616878" lastFinishedPulling="2026-01-25 08:18:15.911519242 +0000 UTC m=+1278.585342775" observedRunningTime="2026-01-25 08:18:16.764589623 +0000 UTC m=+1279.438413156" watchObservedRunningTime="2026-01-25 08:18:16.777915107 +0000 UTC m=+1279.451738640" Jan 25 08:18:17 crc kubenswrapper[4832]: 
I0125 08:18:17.236174 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:17 crc kubenswrapper[4832]: I0125 08:18:17.259515 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 25 08:18:17 crc kubenswrapper[4832]: I0125 08:18:17.709884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc449e14-2c38-4376-8bae-1950edee8d5a","Type":"ContainerStarted","Data":"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da"} Jan 25 08:18:17 crc kubenswrapper[4832]: I0125 08:18:17.710067 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-log" containerID="cri-o://1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5" gracePeriod=30 Jan 25 08:18:17 crc kubenswrapper[4832]: I0125 08:18:17.710311 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-metadata" containerID="cri-o://05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da" gracePeriod=30 Jan 25 08:18:17 crc kubenswrapper[4832]: I0125 08:18:17.713475 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b","Type":"ContainerStarted","Data":"abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868"} Jan 25 08:18:17 crc kubenswrapper[4832]: I0125 08:18:17.779523 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.529929881 podStartE2EDuration="6.779464974s" podCreationTimestamp="2026-01-25 08:18:11 +0000 UTC" firstStartedPulling="2026-01-25 08:18:12.665569638 +0000 UTC m=+1275.339393171" 
lastFinishedPulling="2026-01-25 08:18:15.915104741 +0000 UTC m=+1278.588928264" observedRunningTime="2026-01-25 08:18:17.764144649 +0000 UTC m=+1280.437968182" watchObservedRunningTime="2026-01-25 08:18:17.779464974 +0000 UTC m=+1280.453288507" Jan 25 08:18:17 crc kubenswrapper[4832]: I0125 08:18:17.786126 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.435030151 podStartE2EDuration="6.786110266s" podCreationTimestamp="2026-01-25 08:18:11 +0000 UTC" firstStartedPulling="2026-01-25 08:18:12.563709826 +0000 UTC m=+1275.237533359" lastFinishedPulling="2026-01-25 08:18:15.914789901 +0000 UTC m=+1278.588613474" observedRunningTime="2026-01-25 08:18:17.785284691 +0000 UTC m=+1280.459108224" watchObservedRunningTime="2026-01-25 08:18:17.786110266 +0000 UTC m=+1280.459933799" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.340325 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.421083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc449e14-2c38-4376-8bae-1950edee8d5a-logs\") pod \"dc449e14-2c38-4376-8bae-1950edee8d5a\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.421510 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-config-data\") pod \"dc449e14-2c38-4376-8bae-1950edee8d5a\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.421545 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltkrw\" (UniqueName: \"kubernetes.io/projected/dc449e14-2c38-4376-8bae-1950edee8d5a-kube-api-access-ltkrw\") pod 
\"dc449e14-2c38-4376-8bae-1950edee8d5a\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.421591 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-combined-ca-bundle\") pod \"dc449e14-2c38-4376-8bae-1950edee8d5a\" (UID: \"dc449e14-2c38-4376-8bae-1950edee8d5a\") " Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.422169 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc449e14-2c38-4376-8bae-1950edee8d5a-logs" (OuterVolumeSpecName: "logs") pod "dc449e14-2c38-4376-8bae-1950edee8d5a" (UID: "dc449e14-2c38-4376-8bae-1950edee8d5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.440766 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc449e14-2c38-4376-8bae-1950edee8d5a-kube-api-access-ltkrw" (OuterVolumeSpecName: "kube-api-access-ltkrw") pod "dc449e14-2c38-4376-8bae-1950edee8d5a" (UID: "dc449e14-2c38-4376-8bae-1950edee8d5a"). InnerVolumeSpecName "kube-api-access-ltkrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.454194 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-config-data" (OuterVolumeSpecName: "config-data") pod "dc449e14-2c38-4376-8bae-1950edee8d5a" (UID: "dc449e14-2c38-4376-8bae-1950edee8d5a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.457502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc449e14-2c38-4376-8bae-1950edee8d5a" (UID: "dc449e14-2c38-4376-8bae-1950edee8d5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.524716 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc449e14-2c38-4376-8bae-1950edee8d5a-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.524757 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.524769 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltkrw\" (UniqueName: \"kubernetes.io/projected/dc449e14-2c38-4376-8bae-1950edee8d5a-kube-api-access-ltkrw\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.524780 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc449e14-2c38-4376-8bae-1950edee8d5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.726928 4832 generic.go:334] "Generic (PLEG): container finished" podID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerID="05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da" exitCode=0 Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.726959 4832 generic.go:334] "Generic (PLEG): container finished" podID="dc449e14-2c38-4376-8bae-1950edee8d5a" 
containerID="1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5" exitCode=143 Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.727722 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.736502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc449e14-2c38-4376-8bae-1950edee8d5a","Type":"ContainerDied","Data":"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da"} Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.736541 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc449e14-2c38-4376-8bae-1950edee8d5a","Type":"ContainerDied","Data":"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5"} Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.736553 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc449e14-2c38-4376-8bae-1950edee8d5a","Type":"ContainerDied","Data":"69660a7ae90a3e513737716008b2e0a9f84e8cc9175023164495364c47f8710a"} Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.736567 4832 scope.go:117] "RemoveContainer" containerID="05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.776690 4832 scope.go:117] "RemoveContainer" containerID="1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.776984 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.789417 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.805876 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:18 crc 
kubenswrapper[4832]: I0125 08:18:18.806104 4832 scope.go:117] "RemoveContainer" containerID="05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da" Jan 25 08:18:18 crc kubenswrapper[4832]: E0125 08:18:18.806331 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-log" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.806347 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-log" Jan 25 08:18:18 crc kubenswrapper[4832]: E0125 08:18:18.806360 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-metadata" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.806366 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-metadata" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.806608 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-metadata" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.806653 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" containerName="nova-metadata-log" Jan 25 08:18:18 crc kubenswrapper[4832]: E0125 08:18:18.806763 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da\": container with ID starting with 05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da not found: ID does not exist" containerID="05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.806842 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da"} err="failed to get container status \"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da\": rpc error: code = NotFound desc = could not find container \"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da\": container with ID starting with 05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da not found: ID does not exist" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.806891 4832 scope.go:117] "RemoveContainer" containerID="1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5" Jan 25 08:18:18 crc kubenswrapper[4832]: E0125 08:18:18.807411 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5\": container with ID starting with 1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5 not found: ID does not exist" containerID="1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.807455 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5"} err="failed to get container status \"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5\": rpc error: code = NotFound desc = could not find container \"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5\": container with ID starting with 1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5 not found: ID does not exist" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.807488 4832 scope.go:117] "RemoveContainer" containerID="05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.807761 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.807772 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da"} err="failed to get container status \"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da\": rpc error: code = NotFound desc = could not find container \"05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da\": container with ID starting with 05680f3618c0dcabf0b56109999f6a13a66c8d03368752b1c33800f39f7592da not found: ID does not exist" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.807795 4832 scope.go:117] "RemoveContainer" containerID="1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.808202 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5"} err="failed to get container status \"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5\": rpc error: code = NotFound desc = could not find container \"1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5\": container with ID starting with 1e15f655f576c41f00d4eaab2003fac04bb2cec7f0f58dbc9f6ca35ce8cbdab5 not found: ID does not exist" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.811225 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.818460 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.823632 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.948832 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclp2\" (UniqueName: \"kubernetes.io/projected/72cbecfc-3788-48bb-9b96-e7e12374e0ff-kube-api-access-wclp2\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.948940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.949243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.949333 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cbecfc-3788-48bb-9b96-e7e12374e0ff-logs\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:18 crc kubenswrapper[4832]: I0125 08:18:18.949671 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-config-data\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.051553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.052633 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cbecfc-3788-48bb-9b96-e7e12374e0ff-logs\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.052764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-config-data\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.052861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wclp2\" (UniqueName: \"kubernetes.io/projected/72cbecfc-3788-48bb-9b96-e7e12374e0ff-kube-api-access-wclp2\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.052915 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.053071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cbecfc-3788-48bb-9b96-e7e12374e0ff-logs\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " 
pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.055791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.058041 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.061080 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-config-data\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.072419 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wclp2\" (UniqueName: \"kubernetes.io/projected/72cbecfc-3788-48bb-9b96-e7e12374e0ff-kube-api-access-wclp2\") pod \"nova-metadata-0\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") " pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.135435 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.574849 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:18:19 crc kubenswrapper[4832]: W0125 08:18:19.585243 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72cbecfc_3788_48bb_9b96_e7e12374e0ff.slice/crio-0925b2ace89c7461ba4e28e9d0258e3a3516ddcc9f006a88e500065226b725d8 WatchSource:0}: Error finding container 0925b2ace89c7461ba4e28e9d0258e3a3516ddcc9f006a88e500065226b725d8: Status 404 returned error can't find the container with id 0925b2ace89c7461ba4e28e9d0258e3a3516ddcc9f006a88e500065226b725d8 Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.685916 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc449e14-2c38-4376-8bae-1950edee8d5a" path="/var/lib/kubelet/pods/dc449e14-2c38-4376-8bae-1950edee8d5a/volumes" Jan 25 08:18:19 crc kubenswrapper[4832]: I0125 08:18:19.739960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72cbecfc-3788-48bb-9b96-e7e12374e0ff","Type":"ContainerStarted","Data":"0925b2ace89c7461ba4e28e9d0258e3a3516ddcc9f006a88e500065226b725d8"} Jan 25 08:18:20 crc kubenswrapper[4832]: I0125 08:18:20.752242 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72cbecfc-3788-48bb-9b96-e7e12374e0ff","Type":"ContainerStarted","Data":"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"} Jan 25 08:18:20 crc kubenswrapper[4832]: I0125 08:18:20.752836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72cbecfc-3788-48bb-9b96-e7e12374e0ff","Type":"ContainerStarted","Data":"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"} Jan 25 08:18:20 crc kubenswrapper[4832]: I0125 08:18:20.754891 4832 generic.go:334] "Generic 
(PLEG): container finished" podID="30535fb7-5d1d-47e6-8394-3df7f9d032eb" containerID="72124bd7bf49d598aa55b3e27272ea9046d23af883d96705c9dd9a7fe614d8f3" exitCode=0 Jan 25 08:18:20 crc kubenswrapper[4832]: I0125 08:18:20.754940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c24ss" event={"ID":"30535fb7-5d1d-47e6-8394-3df7f9d032eb","Type":"ContainerDied","Data":"72124bd7bf49d598aa55b3e27272ea9046d23af883d96705c9dd9a7fe614d8f3"} Jan 25 08:18:20 crc kubenswrapper[4832]: I0125 08:18:20.787124 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.787093686 podStartE2EDuration="2.787093686s" podCreationTimestamp="2026-01-25 08:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:20.772671748 +0000 UTC m=+1283.446495291" watchObservedRunningTime="2026-01-25 08:18:20.787093686 +0000 UTC m=+1283.460917239" Jan 25 08:18:21 crc kubenswrapper[4832]: I0125 08:18:21.764482 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1a99b4f-2213-4a2a-9086-e755207a4e3c" containerID="574faa8798ceac6b8e063d9c738b9da32df65a6d57fde1ba725961285d3d8d0e" exitCode=0 Jan 25 08:18:21 crc kubenswrapper[4832]: I0125 08:18:21.764563 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nglwx" event={"ID":"d1a99b4f-2213-4a2a-9086-e755207a4e3c","Type":"ContainerDied","Data":"574faa8798ceac6b8e063d9c738b9da32df65a6d57fde1ba725961285d3d8d0e"} Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.003945 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.004278 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 
08:18:22.149505 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.149563 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.161612 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.253696 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.259770 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.299602 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.320497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwgsl\" (UniqueName: \"kubernetes.io/projected/30535fb7-5d1d-47e6-8394-3df7f9d032eb-kube-api-access-cwgsl\") pod \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.320578 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-config-data\") pod \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.320674 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-combined-ca-bundle\") pod \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.320716 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-scripts\") pod \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\" (UID: \"30535fb7-5d1d-47e6-8394-3df7f9d032eb\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.330654 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-5ld69"] Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.331204 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" podUID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerName="dnsmasq-dns" containerID="cri-o://b8928205d0efd78f2007dc8145ab2101564458ff697c4c0457d12393a40ff035" gracePeriod=10 Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.334623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-scripts" (OuterVolumeSpecName: "scripts") pod "30535fb7-5d1d-47e6-8394-3df7f9d032eb" (UID: "30535fb7-5d1d-47e6-8394-3df7f9d032eb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.339125 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30535fb7-5d1d-47e6-8394-3df7f9d032eb-kube-api-access-cwgsl" (OuterVolumeSpecName: "kube-api-access-cwgsl") pod "30535fb7-5d1d-47e6-8394-3df7f9d032eb" (UID: "30535fb7-5d1d-47e6-8394-3df7f9d032eb"). InnerVolumeSpecName "kube-api-access-cwgsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.366649 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-config-data" (OuterVolumeSpecName: "config-data") pod "30535fb7-5d1d-47e6-8394-3df7f9d032eb" (UID: "30535fb7-5d1d-47e6-8394-3df7f9d032eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.399512 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30535fb7-5d1d-47e6-8394-3df7f9d032eb" (UID: "30535fb7-5d1d-47e6-8394-3df7f9d032eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.423853 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.423887 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.423901 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30535fb7-5d1d-47e6-8394-3df7f9d032eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.423913 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwgsl\" (UniqueName: \"kubernetes.io/projected/30535fb7-5d1d-47e6-8394-3df7f9d032eb-kube-api-access-cwgsl\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.783340 4832 generic.go:334] "Generic (PLEG): container finished" podID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerID="b8928205d0efd78f2007dc8145ab2101564458ff697c4c0457d12393a40ff035" exitCode=0 Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.783559 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" event={"ID":"23584092-31c4-45a1-bf04-88e7f6bb9ece","Type":"ContainerDied","Data":"b8928205d0efd78f2007dc8145ab2101564458ff697c4c0457d12393a40ff035"} Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.786224 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c24ss" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.786301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c24ss" event={"ID":"30535fb7-5d1d-47e6-8394-3df7f9d032eb","Type":"ContainerDied","Data":"123125c156df852f484eb757b1483ddd625943babc609f2b3387378699ad658c"} Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.786321 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="123125c156df852f484eb757b1483ddd625943babc609f2b3387378699ad658c" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.819025 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.822036 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.892738 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 25 08:18:22 crc kubenswrapper[4832]: E0125 08:18:22.893222 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerName="init" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.893242 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerName="init" Jan 25 08:18:22 crc kubenswrapper[4832]: E0125 08:18:22.893264 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerName="dnsmasq-dns" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.893270 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerName="dnsmasq-dns" Jan 25 08:18:22 crc kubenswrapper[4832]: E0125 08:18:22.893289 4832 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="30535fb7-5d1d-47e6-8394-3df7f9d032eb" containerName="nova-cell1-conductor-db-sync" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.893298 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="30535fb7-5d1d-47e6-8394-3df7f9d032eb" containerName="nova-cell1-conductor-db-sync" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.893497 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="30535fb7-5d1d-47e6-8394-3df7f9d032eb" containerName="nova-cell1-conductor-db-sync" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.893512 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="23584092-31c4-45a1-bf04-88e7f6bb9ece" containerName="dnsmasq-dns" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.894151 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.900033 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.928316 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.934021 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-swift-storage-0\") pod \"23584092-31c4-45a1-bf04-88e7f6bb9ece\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.934111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-nb\") pod \"23584092-31c4-45a1-bf04-88e7f6bb9ece\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " Jan 25 08:18:22 crc 
kubenswrapper[4832]: I0125 08:18:22.934146 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-config\") pod \"23584092-31c4-45a1-bf04-88e7f6bb9ece\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.934174 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gg5k\" (UniqueName: \"kubernetes.io/projected/23584092-31c4-45a1-bf04-88e7f6bb9ece-kube-api-access-7gg5k\") pod \"23584092-31c4-45a1-bf04-88e7f6bb9ece\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.934220 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-svc\") pod \"23584092-31c4-45a1-bf04-88e7f6bb9ece\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.934296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-sb\") pod \"23584092-31c4-45a1-bf04-88e7f6bb9ece\" (UID: \"23584092-31c4-45a1-bf04-88e7f6bb9ece\") " Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.939319 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23584092-31c4-45a1-bf04-88e7f6bb9ece-kube-api-access-7gg5k" (OuterVolumeSpecName: "kube-api-access-7gg5k") pod "23584092-31c4-45a1-bf04-88e7f6bb9ece" (UID: "23584092-31c4-45a1-bf04-88e7f6bb9ece"). InnerVolumeSpecName "kube-api-access-7gg5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.991065 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23584092-31c4-45a1-bf04-88e7f6bb9ece" (UID: "23584092-31c4-45a1-bf04-88e7f6bb9ece"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:18:22 crc kubenswrapper[4832]: I0125 08:18:22.995090 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23584092-31c4-45a1-bf04-88e7f6bb9ece" (UID: "23584092-31c4-45a1-bf04-88e7f6bb9ece"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.000858 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23584092-31c4-45a1-bf04-88e7f6bb9ece" (UID: "23584092-31c4-45a1-bf04-88e7f6bb9ece"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.001999 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-config" (OuterVolumeSpecName: "config") pod "23584092-31c4-45a1-bf04-88e7f6bb9ece" (UID: "23584092-31c4-45a1-bf04-88e7f6bb9ece"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.011683 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23584092-31c4-45a1-bf04-88e7f6bb9ece" (UID: "23584092-31c4-45a1-bf04-88e7f6bb9ece"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgc6g\" (UniqueName: \"kubernetes.io/projected/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-kube-api-access-mgc6g\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0" Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036648 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0" Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036703 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0" Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036814 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 
08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036828 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036837 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-config\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036849 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gg5k\" (UniqueName: \"kubernetes.io/projected/23584092-31c4-45a1-bf04-88e7f6bb9ece-kube-api-access-7gg5k\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036858 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.036866 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23584092-31c4-45a1-bf04-88e7f6bb9ece-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.089090 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.089112 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.138658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgc6g\" (UniqueName: \"kubernetes.io/projected/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-kube-api-access-mgc6g\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.138743 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.138779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.143683 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.147233 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.157530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgc6g\" (UniqueName: \"kubernetes.io/projected/2052de31-aa8d-4127-b9ef-12bdb9d90fd9-kube-api-access-mgc6g\") pod \"nova-cell1-conductor-0\" (UID: \"2052de31-aa8d-4127-b9ef-12bdb9d90fd9\") " pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.224754 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.368665 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nglwx"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.545726 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-config-data\") pod \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") "
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.545820 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-scripts\") pod \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") "
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.545844 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6fx9\" (UniqueName: \"kubernetes.io/projected/d1a99b4f-2213-4a2a-9086-e755207a4e3c-kube-api-access-k6fx9\") pod \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") "
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.545897 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-combined-ca-bundle\") pod \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\" (UID: \"d1a99b4f-2213-4a2a-9086-e755207a4e3c\") "
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.550003 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-scripts" (OuterVolumeSpecName: "scripts") pod "d1a99b4f-2213-4a2a-9086-e755207a4e3c" (UID: "d1a99b4f-2213-4a2a-9086-e755207a4e3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.550601 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a99b4f-2213-4a2a-9086-e755207a4e3c-kube-api-access-k6fx9" (OuterVolumeSpecName: "kube-api-access-k6fx9") pod "d1a99b4f-2213-4a2a-9086-e755207a4e3c" (UID: "d1a99b4f-2213-4a2a-9086-e755207a4e3c"). InnerVolumeSpecName "kube-api-access-k6fx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.571111 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1a99b4f-2213-4a2a-9086-e755207a4e3c" (UID: "d1a99b4f-2213-4a2a-9086-e755207a4e3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.592532 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-config-data" (OuterVolumeSpecName: "config-data") pod "d1a99b4f-2213-4a2a-9086-e755207a4e3c" (UID: "d1a99b4f-2213-4a2a-9086-e755207a4e3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.648421 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-config-data\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.648460 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-scripts\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.648475 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6fx9\" (UniqueName: \"kubernetes.io/projected/d1a99b4f-2213-4a2a-9086-e755207a4e3c-kube-api-access-k6fx9\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.648489 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a99b4f-2213-4a2a-9086-e755207a4e3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.737457 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 25 08:18:23 crc kubenswrapper[4832]: W0125 08:18:23.739154 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2052de31_aa8d_4127_b9ef_12bdb9d90fd9.slice/crio-18d52b0cb0493ce78f7ffe07a2fd8deed9fd8a0357264fcd3d0d49a9e23554cb WatchSource:0}: Error finding container 18d52b0cb0493ce78f7ffe07a2fd8deed9fd8a0357264fcd3d0d49a9e23554cb: Status 404 returned error can't find the container with id 18d52b0cb0493ce78f7ffe07a2fd8deed9fd8a0357264fcd3d0d49a9e23554cb
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.824879 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-5ld69" event={"ID":"23584092-31c4-45a1-bf04-88e7f6bb9ece","Type":"ContainerDied","Data":"3a9334c361bca692b685a64fae3b6a9bb4c9df39a7756612e7c611056f12bab4"}
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.825022 4832 scope.go:117] "RemoveContainer" containerID="b8928205d0efd78f2007dc8145ab2101564458ff697c4c0457d12393a40ff035"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.824922 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-5ld69"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.828423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nglwx" event={"ID":"d1a99b4f-2213-4a2a-9086-e755207a4e3c","Type":"ContainerDied","Data":"c54b5bccd303b53ad0e3d2acd8a9fc651c99940d026f7fb6a375531e3792d6d2"}
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.828469 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54b5bccd303b53ad0e3d2acd8a9fc651c99940d026f7fb6a375531e3792d6d2"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.828473 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nglwx"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.831446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2052de31-aa8d-4127-b9ef-12bdb9d90fd9","Type":"ContainerStarted","Data":"18d52b0cb0493ce78f7ffe07a2fd8deed9fd8a0357264fcd3d0d49a9e23554cb"}
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.856254 4832 scope.go:117] "RemoveContainer" containerID="b14131af1f01635c790897f065c2918beb976670f6a0aa776de8cb70a7977691"
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.875541 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-5ld69"]
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.887941 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-5ld69"]
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.980231 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.980540 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-log" containerID="cri-o://e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae" gracePeriod=30
Jan 25 08:18:23 crc kubenswrapper[4832]: I0125 08:18:23.980658 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-api" containerID="cri-o://abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868" gracePeriod=30
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.000878 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.027044 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.027283 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-log" containerID="cri-o://f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5" gracePeriod=30
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.027373 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-metadata" containerID="cri-o://6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877" gracePeriod=30
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.135905 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.135955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.604066 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.767480 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-nova-metadata-tls-certs\") pod \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") "
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.767592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-config-data\") pod \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") "
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.767633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-combined-ca-bundle\") pod \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") "
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.767673 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cbecfc-3788-48bb-9b96-e7e12374e0ff-logs\") pod \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") "
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.767782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wclp2\" (UniqueName: \"kubernetes.io/projected/72cbecfc-3788-48bb-9b96-e7e12374e0ff-kube-api-access-wclp2\") pod \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\" (UID: \"72cbecfc-3788-48bb-9b96-e7e12374e0ff\") "
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.769570 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72cbecfc-3788-48bb-9b96-e7e12374e0ff-logs" (OuterVolumeSpecName: "logs") pod "72cbecfc-3788-48bb-9b96-e7e12374e0ff" (UID: "72cbecfc-3788-48bb-9b96-e7e12374e0ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.773739 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cbecfc-3788-48bb-9b96-e7e12374e0ff-kube-api-access-wclp2" (OuterVolumeSpecName: "kube-api-access-wclp2") pod "72cbecfc-3788-48bb-9b96-e7e12374e0ff" (UID: "72cbecfc-3788-48bb-9b96-e7e12374e0ff"). InnerVolumeSpecName "kube-api-access-wclp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.796146 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-config-data" (OuterVolumeSpecName: "config-data") pod "72cbecfc-3788-48bb-9b96-e7e12374e0ff" (UID: "72cbecfc-3788-48bb-9b96-e7e12374e0ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.807629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72cbecfc-3788-48bb-9b96-e7e12374e0ff" (UID: "72cbecfc-3788-48bb-9b96-e7e12374e0ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.826922 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "72cbecfc-3788-48bb-9b96-e7e12374e0ff" (UID: "72cbecfc-3788-48bb-9b96-e7e12374e0ff"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.853064 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2052de31-aa8d-4127-b9ef-12bdb9d90fd9","Type":"ContainerStarted","Data":"6e23dafdf520e41877889501c4fd32c49380f6c94d2d3c00acada4b7314cc2a4"}
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.853281 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.858836 4832 generic.go:334] "Generic (PLEG): container finished" podID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerID="e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae" exitCode=143
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.858933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b","Type":"ContainerDied","Data":"e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae"}
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.860249 4832 generic.go:334] "Generic (PLEG): container finished" podID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerID="6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877" exitCode=0
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.860285 4832 generic.go:334] "Generic (PLEG): container finished" podID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerID="f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5" exitCode=143
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.860463 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d848c5d5-d11c-4e63-b958-f98b1930587f" containerName="nova-scheduler-scheduler" containerID="cri-o://49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" gracePeriod=30
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.860748 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.862285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72cbecfc-3788-48bb-9b96-e7e12374e0ff","Type":"ContainerDied","Data":"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"}
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.862405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72cbecfc-3788-48bb-9b96-e7e12374e0ff","Type":"ContainerDied","Data":"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"}
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.862413 4832 scope.go:117] "RemoveContainer" containerID="6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.862421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"72cbecfc-3788-48bb-9b96-e7e12374e0ff","Type":"ContainerDied","Data":"0925b2ace89c7461ba4e28e9d0258e3a3516ddcc9f006a88e500065226b725d8"}
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.873314 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.873338 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cbecfc-3788-48bb-9b96-e7e12374e0ff-logs\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.873349 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wclp2\" (UniqueName: \"kubernetes.io/projected/72cbecfc-3788-48bb-9b96-e7e12374e0ff-kube-api-access-wclp2\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.873358 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.873367 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cbecfc-3788-48bb-9b96-e7e12374e0ff-config-data\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.875708 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.875696425 podStartE2EDuration="2.875696425s" podCreationTimestamp="2026-01-25 08:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:24.870807217 +0000 UTC m=+1287.544630750" watchObservedRunningTime="2026-01-25 08:18:24.875696425 +0000 UTC m=+1287.549519958"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.901548 4832 scope.go:117] "RemoveContainer" containerID="f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.904945 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.927928 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.927990 4832 scope.go:117] "RemoveContainer" containerID="6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"
Jan 25 08:18:24 crc kubenswrapper[4832]: E0125 08:18:24.928520 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877\": container with ID starting with 6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877 not found: ID does not exist" containerID="6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.928556 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"} err="failed to get container status \"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877\": rpc error: code = NotFound desc = could not find container \"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877\": container with ID starting with 6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877 not found: ID does not exist"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.928585 4832 scope.go:117] "RemoveContainer" containerID="f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"
Jan 25 08:18:24 crc kubenswrapper[4832]: E0125 08:18:24.928888 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5\": container with ID starting with f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5 not found: ID does not exist" containerID="f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.928917 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"} err="failed to get container status \"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5\": rpc error: code = NotFound desc = could not find container \"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5\": container with ID starting with f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5 not found: ID does not exist"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.928942 4832 scope.go:117] "RemoveContainer" containerID="6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.931989 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877"} err="failed to get container status \"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877\": rpc error: code = NotFound desc = could not find container \"6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877\": container with ID starting with 6bde26287b26a7ce1f511368bfd656cc8cff758e3ff23bae3180f58446f29877 not found: ID does not exist"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.932040 4832 scope.go:117] "RemoveContainer" containerID="f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.932561 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5"} err="failed to get container status \"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5\": rpc error: code = NotFound desc = could not find container \"f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5\": container with ID starting with f802a965926b75f2773750fd4e256e2155970446c16132b06acaa289d72049b5 not found: ID does not exist"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.936442 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 25 08:18:24 crc kubenswrapper[4832]: E0125 08:18:24.936910 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-metadata"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.936935 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-metadata"
Jan 25 08:18:24 crc kubenswrapper[4832]: E0125 08:18:24.936951 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-log"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.936962 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-log"
Jan 25 08:18:24 crc kubenswrapper[4832]: E0125 08:18:24.936975 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a99b4f-2213-4a2a-9086-e755207a4e3c" containerName="nova-manage"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.936982 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a99b4f-2213-4a2a-9086-e755207a4e3c" containerName="nova-manage"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.937195 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-metadata"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.937224 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a99b4f-2213-4a2a-9086-e755207a4e3c" containerName="nova-manage"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.937245 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" containerName="nova-metadata-log"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.938245 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.940651 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.941083 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 25 08:18:24 crc kubenswrapper[4832]: I0125 08:18:24.951102 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.076667 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-config-data\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.076720 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msz8v\" (UniqueName: \"kubernetes.io/projected/fcff2a1c-2a06-4930-aec6-2970335e6e78-kube-api-access-msz8v\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.076798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.076876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcff2a1c-2a06-4930-aec6-2970335e6e78-logs\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.076911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.178408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcff2a1c-2a06-4930-aec6-2970335e6e78-logs\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.178474 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.178581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-config-data\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.178611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msz8v\" (UniqueName: \"kubernetes.io/projected/fcff2a1c-2a06-4930-aec6-2970335e6e78-kube-api-access-msz8v\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.178655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.178978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcff2a1c-2a06-4930-aec6-2970335e6e78-logs\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.185170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.191332 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-config-data\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.193750 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.197038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msz8v\" (UniqueName: \"kubernetes.io/projected/fcff2a1c-2a06-4930-aec6-2970335e6e78-kube-api-access-msz8v\") pod \"nova-metadata-0\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.256198 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.688958 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23584092-31c4-45a1-bf04-88e7f6bb9ece" path="/var/lib/kubelet/pods/23584092-31c4-45a1-bf04-88e7f6bb9ece/volumes"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.689852 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cbecfc-3788-48bb-9b96-e7e12374e0ff" path="/var/lib/kubelet/pods/72cbecfc-3788-48bb-9b96-e7e12374e0ff/volumes"
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.779945 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 25 08:18:25 crc kubenswrapper[4832]: W0125 08:18:25.785017 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcff2a1c_2a06_4930_aec6_2970335e6e78.slice/crio-c227bdc70c6ae3586ade0af7a0a3a0c9868bcd73c84315b48d0dcdc8cba6892b WatchSource:0}: Error finding container c227bdc70c6ae3586ade0af7a0a3a0c9868bcd73c84315b48d0dcdc8cba6892b: Status 404 returned error can't find the container with id c227bdc70c6ae3586ade0af7a0a3a0c9868bcd73c84315b48d0dcdc8cba6892b
Jan 25 08:18:25 crc kubenswrapper[4832]: I0125 08:18:25.876508 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcff2a1c-2a06-4930-aec6-2970335e6e78","Type":"ContainerStarted","Data":"c227bdc70c6ae3586ade0af7a0a3a0c9868bcd73c84315b48d0dcdc8cba6892b"}
Jan 25 08:18:26 crc kubenswrapper[4832]: I0125 08:18:26.887993 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcff2a1c-2a06-4930-aec6-2970335e6e78","Type":"ContainerStarted","Data":"cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af"}
Jan 25 08:18:26 crc kubenswrapper[4832]: I0125 08:18:26.888500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcff2a1c-2a06-4930-aec6-2970335e6e78","Type":"ContainerStarted","Data":"7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00"}
Jan 25 08:18:26 crc kubenswrapper[4832]: I0125 08:18:26.926159 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.926135436 podStartE2EDuration="2.926135436s" podCreationTimestamp="2026-01-25 08:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:26.907793739 +0000 UTC m=+1289.581617272" watchObservedRunningTime="2026-01-25 08:18:26.926135436 +0000 UTC m=+1289.599958979"
Jan 25 08:18:27 crc kubenswrapper[4832]: E0125 08:18:27.262294 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 25 08:18:27 crc kubenswrapper[4832]: E0125 08:18:27.264960 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 25 08:18:27 crc kubenswrapper[4832]: E0125 08:18:27.267000 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID:
container is stopping, stdout: , stderr: , exit code -1" containerID="49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 25 08:18:27 crc kubenswrapper[4832]: E0125 08:18:27.267150 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d848c5d5-d11c-4e63-b958-f98b1930587f" containerName="nova-scheduler-scheduler" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.263288 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.752345 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.880329 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.919436 4832 generic.go:334] "Generic (PLEG): container finished" podID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerID="abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868" exitCode=0 Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.919502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b","Type":"ContainerDied","Data":"abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868"} Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.919534 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b","Type":"ContainerDied","Data":"8bc9e4043efba7378cb4eef0e94b5f484e43b6112be5f6201c78e825b476acf0"} Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.919553 4832 scope.go:117] "RemoveContainer" containerID="abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.919660 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.920537 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-combined-ca-bundle\") pod \"d848c5d5-d11c-4e63-b958-f98b1930587f\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.920609 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-config-data\") pod \"d848c5d5-d11c-4e63-b958-f98b1930587f\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.920758 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb8wc\" (UniqueName: \"kubernetes.io/projected/d848c5d5-d11c-4e63-b958-f98b1930587f-kube-api-access-sb8wc\") pod \"d848c5d5-d11c-4e63-b958-f98b1930587f\" (UID: \"d848c5d5-d11c-4e63-b958-f98b1930587f\") " Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.926095 4832 generic.go:334] "Generic (PLEG): container finished" podID="d848c5d5-d11c-4e63-b958-f98b1930587f" containerID="49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" exitCode=0 Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.926169 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.926209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d848c5d5-d11c-4e63-b958-f98b1930587f","Type":"ContainerDied","Data":"49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181"} Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.926755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d848c5d5-d11c-4e63-b958-f98b1930587f","Type":"ContainerDied","Data":"f64a9c73159e75b8b904c44e467e7695ec4362f813d307cecefe052d5e83bb85"} Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.926926 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d848c5d5-d11c-4e63-b958-f98b1930587f-kube-api-access-sb8wc" (OuterVolumeSpecName: "kube-api-access-sb8wc") pod "d848c5d5-d11c-4e63-b958-f98b1930587f" (UID: "d848c5d5-d11c-4e63-b958-f98b1930587f"). InnerVolumeSpecName "kube-api-access-sb8wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.954496 4832 scope.go:117] "RemoveContainer" containerID="e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.965428 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d848c5d5-d11c-4e63-b958-f98b1930587f" (UID: "d848c5d5-d11c-4e63-b958-f98b1930587f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.974016 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-config-data" (OuterVolumeSpecName: "config-data") pod "d848c5d5-d11c-4e63-b958-f98b1930587f" (UID: "d848c5d5-d11c-4e63-b958-f98b1930587f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.994482 4832 scope.go:117] "RemoveContainer" containerID="abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868" Jan 25 08:18:28 crc kubenswrapper[4832]: E0125 08:18:28.995291 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868\": container with ID starting with abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868 not found: ID does not exist" containerID="abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.995337 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868"} err="failed to get container status \"abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868\": rpc error: code = NotFound desc = could not find container \"abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868\": container with ID starting with abbb6600be50a48311111b2e0d85ed9bb5b5c4b994f2586a23fe54cb81a55868 not found: ID does not exist" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.995367 4832 scope.go:117] "RemoveContainer" containerID="e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae" Jan 25 08:18:28 crc kubenswrapper[4832]: E0125 08:18:28.995793 4832 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae\": container with ID starting with e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae not found: ID does not exist" containerID="e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.995859 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae"} err="failed to get container status \"e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae\": rpc error: code = NotFound desc = could not find container \"e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae\": container with ID starting with e186d65d7a165d8b58d1dd38838b87f5dca98bbec31ada54fc448bd8f429b1ae not found: ID does not exist" Jan 25 08:18:28 crc kubenswrapper[4832]: I0125 08:18:28.995882 4832 scope.go:117] "RemoveContainer" containerID="49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.019075 4832 scope.go:117] "RemoveContainer" containerID="49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" Jan 25 08:18:29 crc kubenswrapper[4832]: E0125 08:18:29.019972 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181\": container with ID starting with 49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181 not found: ID does not exist" containerID="49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.020010 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181"} err="failed to get container status \"49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181\": rpc error: code = NotFound desc = could not find container \"49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181\": container with ID starting with 49406627d9e7da09cfae6f9e29a489670acfed15c08e76117fe0e3a4244d3181 not found: ID does not exist" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.022903 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-logs\") pod \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.022971 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-combined-ca-bundle\") pod \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.023009 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4zl9\" (UniqueName: \"kubernetes.io/projected/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-kube-api-access-j4zl9\") pod \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.023148 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-config-data\") pod \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\" (UID: \"95f0c1bd-2ef0-41c2-960f-ea7e06873c6b\") " Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.023502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-logs" (OuterVolumeSpecName: "logs") pod "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" (UID: "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.023887 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb8wc\" (UniqueName: \"kubernetes.io/projected/d848c5d5-d11c-4e63-b958-f98b1930587f-kube-api-access-sb8wc\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.023915 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.023926 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.023936 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d848c5d5-d11c-4e63-b958-f98b1930587f-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.026697 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-kube-api-access-j4zl9" (OuterVolumeSpecName: "kube-api-access-j4zl9") pod "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" (UID: "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b"). InnerVolumeSpecName "kube-api-access-j4zl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.049649 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" (UID: "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.066085 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-config-data" (OuterVolumeSpecName: "config-data") pod "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" (UID: "95f0c1bd-2ef0-41c2-960f-ea7e06873c6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.126561 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.126618 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.126637 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4zl9\" (UniqueName: \"kubernetes.io/projected/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b-kube-api-access-j4zl9\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.274604 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.286506 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.303491 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: E0125 08:18:29.303999 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d848c5d5-d11c-4e63-b958-f98b1930587f" containerName="nova-scheduler-scheduler" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.304023 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d848c5d5-d11c-4e63-b958-f98b1930587f" containerName="nova-scheduler-scheduler" Jan 25 08:18:29 crc kubenswrapper[4832]: E0125 08:18:29.304041 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-api" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.304049 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-api" Jan 25 08:18:29 crc kubenswrapper[4832]: E0125 08:18:29.304071 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-log" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.304079 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-log" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.304329 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d848c5d5-d11c-4e63-b958-f98b1930587f" containerName="nova-scheduler-scheduler" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.304357 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" containerName="nova-api-log" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.304400 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" 
containerName="nova-api-api" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.305729 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.310828 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.342637 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.359118 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.368278 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.380737 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.382607 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.384535 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.399906 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.439796 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-config-data\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.439876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.439898 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28dr\" (UniqueName: \"kubernetes.io/projected/5f2f5901-82a8-4669-91aa-a8973cac5799-kube-api-access-q28dr\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.542070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.542119 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q28dr\" (UniqueName: \"kubernetes.io/projected/5f2f5901-82a8-4669-91aa-a8973cac5799-kube-api-access-q28dr\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.542178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.542234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjv9l\" (UniqueName: \"kubernetes.io/projected/7f893740-ce4d-4ee2-994d-98739d4b1f7d-kube-api-access-vjv9l\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.542260 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f893740-ce4d-4ee2-994d-98739d4b1f7d-logs\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.542291 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-config-data\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.542585 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-config-data\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.551115 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-config-data\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.555944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.558740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28dr\" (UniqueName: \"kubernetes.io/projected/5f2f5901-82a8-4669-91aa-a8973cac5799-kube-api-access-q28dr\") pod \"nova-scheduler-0\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.644676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-config-data\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.644917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc 
kubenswrapper[4832]: I0125 08:18:29.645016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjv9l\" (UniqueName: \"kubernetes.io/projected/7f893740-ce4d-4ee2-994d-98739d4b1f7d-kube-api-access-vjv9l\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.645073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f893740-ce4d-4ee2-994d-98739d4b1f7d-logs\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.645754 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f893740-ce4d-4ee2-994d-98739d4b1f7d-logs\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.649066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-config-data\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.650167 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.651227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.664548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjv9l\" (UniqueName: \"kubernetes.io/projected/7f893740-ce4d-4ee2-994d-98739d4b1f7d-kube-api-access-vjv9l\") pod \"nova-api-0\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " pod="openstack/nova-api-0" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.685269 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95f0c1bd-2ef0-41c2-960f-ea7e06873c6b" path="/var/lib/kubelet/pods/95f0c1bd-2ef0-41c2-960f-ea7e06873c6b/volumes" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.686591 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d848c5d5-d11c-4e63-b958-f98b1930587f" path="/var/lib/kubelet/pods/d848c5d5-d11c-4e63-b958-f98b1930587f/volumes" Jan 25 08:18:29 crc kubenswrapper[4832]: I0125 08:18:29.707591 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.102691 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:18:30 crc kubenswrapper[4832]: W0125 08:18:30.104343 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f2f5901_82a8_4669_91aa_a8973cac5799.slice/crio-74d8dffa6c375f60dc134a7cdd5905607b0d895f0a96d7bda72e5a7c8401eb3e WatchSource:0}: Error finding container 74d8dffa6c375f60dc134a7cdd5905607b0d895f0a96d7bda72e5a7c8401eb3e: Status 404 returned error can't find the container with id 74d8dffa6c375f60dc134a7cdd5905607b0d895f0a96d7bda72e5a7c8401eb3e Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.257294 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.258739 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.279872 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.794068 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.961832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f893740-ce4d-4ee2-994d-98739d4b1f7d","Type":"ContainerStarted","Data":"1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646"} Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.962209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f893740-ce4d-4ee2-994d-98739d4b1f7d","Type":"ContainerStarted","Data":"d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab"} Jan 25 08:18:30 crc 
kubenswrapper[4832]: I0125 08:18:30.962230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f893740-ce4d-4ee2-994d-98739d4b1f7d","Type":"ContainerStarted","Data":"6e9cadbf9897c01825e4da6c935d68d38c55b39f1edc627364ba9456e3e27986"}
Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.966430 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f2f5901-82a8-4669-91aa-a8973cac5799","Type":"ContainerStarted","Data":"035976e0914f65e00fd75711cd7fc1f0543ef5eef21ce3fd3c8a346f34096785"}
Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.966464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f2f5901-82a8-4669-91aa-a8973cac5799","Type":"ContainerStarted","Data":"74d8dffa6c375f60dc134a7cdd5905607b0d895f0a96d7bda72e5a7c8401eb3e"}
Jan 25 08:18:30 crc kubenswrapper[4832]: I0125 08:18:30.992912 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.992888022 podStartE2EDuration="1.992888022s" podCreationTimestamp="2026-01-25 08:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:30.982530758 +0000 UTC m=+1293.656354291" watchObservedRunningTime="2026-01-25 08:18:30.992888022 +0000 UTC m=+1293.666711555"
Jan 25 08:18:31 crc kubenswrapper[4832]: I0125 08:18:31.004983 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.004958308 podStartE2EDuration="2.004958308s" podCreationTimestamp="2026-01-25 08:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:30.995424219 +0000 UTC m=+1293.669247752" watchObservedRunningTime="2026-01-25 08:18:31.004958308 +0000 UTC m=+1293.678781841"
Jan 25 08:18:34 crc kubenswrapper[4832]: I0125 08:18:34.207499 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 25 08:18:34 crc kubenswrapper[4832]: I0125 08:18:34.208088 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2bf96fb8-1a77-4546-ba91-aa18499fa5c4" containerName="kube-state-metrics" containerID="cri-o://782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d" gracePeriod=30
Jan 25 08:18:34 crc kubenswrapper[4832]: I0125 08:18:34.651728 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 25 08:18:34 crc kubenswrapper[4832]: I0125 08:18:34.710620 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 25 08:18:34 crc kubenswrapper[4832]: I0125 08:18:34.885636 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-585mz\" (UniqueName: \"kubernetes.io/projected/2bf96fb8-1a77-4546-ba91-aa18499fa5c4-kube-api-access-585mz\") pod \"2bf96fb8-1a77-4546-ba91-aa18499fa5c4\" (UID: \"2bf96fb8-1a77-4546-ba91-aa18499fa5c4\") "
Jan 25 08:18:34 crc kubenswrapper[4832]: I0125 08:18:34.892704 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bf96fb8-1a77-4546-ba91-aa18499fa5c4-kube-api-access-585mz" (OuterVolumeSpecName: "kube-api-access-585mz") pod "2bf96fb8-1a77-4546-ba91-aa18499fa5c4" (UID: "2bf96fb8-1a77-4546-ba91-aa18499fa5c4"). InnerVolumeSpecName "kube-api-access-585mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:18:34 crc kubenswrapper[4832]: I0125 08:18:34.989030 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-585mz\" (UniqueName: \"kubernetes.io/projected/2bf96fb8-1a77-4546-ba91-aa18499fa5c4-kube-api-access-585mz\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.006024 4832 generic.go:334] "Generic (PLEG): container finished" podID="2bf96fb8-1a77-4546-ba91-aa18499fa5c4" containerID="782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d" exitCode=2
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.006079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2bf96fb8-1a77-4546-ba91-aa18499fa5c4","Type":"ContainerDied","Data":"782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d"}
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.006126 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.006141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2bf96fb8-1a77-4546-ba91-aa18499fa5c4","Type":"ContainerDied","Data":"631abf2a2e5554c2327a2bbc655e10f6c7c1fba7de706586683185004fe4b4b0"}
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.006167 4832 scope.go:117] "RemoveContainer" containerID="782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.035379 4832 scope.go:117] "RemoveContainer" containerID="782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d"
Jan 25 08:18:35 crc kubenswrapper[4832]: E0125 08:18:35.035889 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d\": container with ID starting with 782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d not found: ID does not exist" containerID="782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.035934 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d"} err="failed to get container status \"782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d\": rpc error: code = NotFound desc = could not find container \"782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d\": container with ID starting with 782826cc8e1662afe1f667341008333bafa7d7142321c45593db4d079f0b255d not found: ID does not exist"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.039959 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.048666 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.063825 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 25 08:18:35 crc kubenswrapper[4832]: E0125 08:18:35.064425 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf96fb8-1a77-4546-ba91-aa18499fa5c4" containerName="kube-state-metrics"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.064447 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf96fb8-1a77-4546-ba91-aa18499fa5c4" containerName="kube-state-metrics"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.064700 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bf96fb8-1a77-4546-ba91-aa18499fa5c4" containerName="kube-state-metrics"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.065540 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.068658 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.068718 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.075868 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.098082 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.098134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.098175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xqj\" (UniqueName: \"kubernetes.io/projected/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-api-access-t7xqj\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.098203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.199544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xqj\" (UniqueName: \"kubernetes.io/projected/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-api-access-t7xqj\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.199611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.199764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.199808 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.205214 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.205892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.206681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2ea2ab-d727-4547-b2b4-d905b66428e5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.220198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xqj\" (UniqueName: \"kubernetes.io/projected/ad2ea2ab-d727-4547-b2b4-d905b66428e5-kube-api-access-t7xqj\") pod \"kube-state-metrics-0\" (UID: \"ad2ea2ab-d727-4547-b2b4-d905b66428e5\") " pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.257433 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.257501 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.383989 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.681739 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bf96fb8-1a77-4546-ba91-aa18499fa5c4" path="/var/lib/kubelet/pods/2bf96fb8-1a77-4546-ba91-aa18499fa5c4/volumes"
Jan 25 08:18:35 crc kubenswrapper[4832]: I0125 08:18:35.834655 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 25 08:18:35 crc kubenswrapper[4832]: W0125 08:18:35.838618 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad2ea2ab_d727_4547_b2b4_d905b66428e5.slice/crio-27183eb15c4f413f50f4954651949b3e507369ed4e31ccfaad16312060ccd2f6 WatchSource:0}: Error finding container 27183eb15c4f413f50f4954651949b3e507369ed4e31ccfaad16312060ccd2f6: Status 404 returned error can't find the container with id 27183eb15c4f413f50f4954651949b3e507369ed4e31ccfaad16312060ccd2f6
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.017654 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ad2ea2ab-d727-4547-b2b4-d905b66428e5","Type":"ContainerStarted","Data":"27183eb15c4f413f50f4954651949b3e507369ed4e31ccfaad16312060ccd2f6"}
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.162797 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.163130 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-central-agent" containerID="cri-o://bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16" gracePeriod=30
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.163149 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="proxy-httpd" containerID="cri-o://427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6" gracePeriod=30
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.163209 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="sg-core" containerID="cri-o://9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2" gracePeriod=30
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.163491 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-notification-agent" containerID="cri-o://d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75" gracePeriod=30
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.273610 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 25 08:18:36 crc kubenswrapper[4832]: I0125 08:18:36.273871 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.028729 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ad2ea2ab-d727-4547-b2b4-d905b66428e5","Type":"ContainerStarted","Data":"76167ab26ea804d26715fa87666dcdf8d62c9c1862e3e8989590dd3edeb60135"}
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.029592 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.032695 4832 generic.go:334] "Generic (PLEG): container finished" podID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerID="427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6" exitCode=0
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.033054 4832 generic.go:334] "Generic (PLEG): container finished" podID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerID="9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2" exitCode=2
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.033200 4832 generic.go:334] "Generic (PLEG): container finished" podID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerID="bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16" exitCode=0
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.032776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerDied","Data":"427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6"}
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.033514 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerDied","Data":"9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2"}
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.033696 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerDied","Data":"bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16"}
Jan 25 08:18:37 crc kubenswrapper[4832]: I0125 08:18:37.059958 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.6675206070000002 podStartE2EDuration="2.059933387s" podCreationTimestamp="2026-01-25 08:18:35 +0000 UTC" firstStartedPulling="2026-01-25 08:18:35.840204308 +0000 UTC m=+1298.514027841" lastFinishedPulling="2026-01-25 08:18:36.232617098 +0000 UTC m=+1298.906440621" observedRunningTime="2026-01-25 08:18:37.046970344 +0000 UTC m=+1299.720793887" watchObservedRunningTime="2026-01-25 08:18:37.059933387 +0000 UTC m=+1299.733756920"
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.474604 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.509377 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-sg-core-conf-yaml\") pod \"a51d9c21-2b71-46f0-8b63-9961d75247fe\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") "
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.509859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-scripts\") pod \"a51d9c21-2b71-46f0-8b63-9961d75247fe\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") "
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.509898 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-log-httpd\") pod \"a51d9c21-2b71-46f0-8b63-9961d75247fe\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") "
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.510199 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7nj7\" (UniqueName: \"kubernetes.io/projected/a51d9c21-2b71-46f0-8b63-9961d75247fe-kube-api-access-n7nj7\") pod \"a51d9c21-2b71-46f0-8b63-9961d75247fe\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") "
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.510289 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-run-httpd\") pod \"a51d9c21-2b71-46f0-8b63-9961d75247fe\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") "
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.510334 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-config-data\") pod \"a51d9c21-2b71-46f0-8b63-9961d75247fe\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") "
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.510400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-combined-ca-bundle\") pod \"a51d9c21-2b71-46f0-8b63-9961d75247fe\" (UID: \"a51d9c21-2b71-46f0-8b63-9961d75247fe\") "
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.518018 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a51d9c21-2b71-46f0-8b63-9961d75247fe" (UID: "a51d9c21-2b71-46f0-8b63-9961d75247fe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.518469 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a51d9c21-2b71-46f0-8b63-9961d75247fe" (UID: "a51d9c21-2b71-46f0-8b63-9961d75247fe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.522101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-scripts" (OuterVolumeSpecName: "scripts") pod "a51d9c21-2b71-46f0-8b63-9961d75247fe" (UID: "a51d9c21-2b71-46f0-8b63-9961d75247fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.524012 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51d9c21-2b71-46f0-8b63-9961d75247fe-kube-api-access-n7nj7" (OuterVolumeSpecName: "kube-api-access-n7nj7") pod "a51d9c21-2b71-46f0-8b63-9961d75247fe" (UID: "a51d9c21-2b71-46f0-8b63-9961d75247fe"). InnerVolumeSpecName "kube-api-access-n7nj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.560520 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a51d9c21-2b71-46f0-8b63-9961d75247fe" (UID: "a51d9c21-2b71-46f0-8b63-9961d75247fe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.613466 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-scripts\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.613512 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.613530 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7nj7\" (UniqueName: \"kubernetes.io/projected/a51d9c21-2b71-46f0-8b63-9961d75247fe-kube-api-access-n7nj7\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.613551 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a51d9c21-2b71-46f0-8b63-9961d75247fe-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.613568 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.622936 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a51d9c21-2b71-46f0-8b63-9961d75247fe" (UID: "a51d9c21-2b71-46f0-8b63-9961d75247fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.651675 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.654660 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-config-data" (OuterVolumeSpecName: "config-data") pod "a51d9c21-2b71-46f0-8b63-9961d75247fe" (UID: "a51d9c21-2b71-46f0-8b63-9961d75247fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.687339 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.708805 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.708860 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.715856 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-config-data\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:39 crc kubenswrapper[4832]: I0125 08:18:39.716184 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51d9c21-2b71-46f0-8b63-9961d75247fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.062863 4832 generic.go:334] "Generic (PLEG): container finished" podID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerID="d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75" exitCode=0
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.062899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerDied","Data":"d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75"}
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.062941 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a51d9c21-2b71-46f0-8b63-9961d75247fe","Type":"ContainerDied","Data":"cb8b35be57621d2200a2533e36665fb8f3c966b024204287d6fa4f5f0430a94f"}
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.062961 4832 scope.go:117] "RemoveContainer" containerID="427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.062964 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.094369 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.095231 4832 scope.go:117] "RemoveContainer" containerID="9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.111960 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.115996 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.124539 4832 scope.go:117] "RemoveContainer" containerID="d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.139456 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.139906 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="sg-core"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.139923 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="sg-core"
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.139954 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-notification-agent"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.139961 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-notification-agent"
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.139970 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-central-agent"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.139977 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-central-agent"
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.139991 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="proxy-httpd"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.139998 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="proxy-httpd"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.140182 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-notification-agent"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.140197 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="ceilometer-central-agent"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.140211 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="proxy-httpd"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.140228 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" containerName="sg-core"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.142054 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.149079 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.149417 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.149577 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.154230 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.200732 4832 scope.go:117] "RemoveContainer" containerID="bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.221356 4832 scope.go:117] "RemoveContainer" containerID="427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6"
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.221982 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6\": container with ID starting with 427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6 not found: ID does not exist" containerID="427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.222012 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6"} err="failed to get container status \"427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6\": rpc error: code = NotFound desc = could not find container \"427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6\": container with ID starting with 427ffa790e251b576c344c77d7e41b6e5519f58d85c8f21ec107fe25c1d306d6 not found: ID does not exist"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.222052 4832 scope.go:117] "RemoveContainer" containerID="9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2"
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.222377 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2\": container with ID starting with 9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2 not found: ID does not exist" containerID="9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.222548 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2"} err="failed to get container status \"9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2\": rpc error: code = NotFound desc = could not find container \"9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2\": container with ID starting with 9932a79a927984403bed18124182da23df84fc7421fe75b4cc847e0252c545c2 not found: ID does not exist"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.222629 4832 scope.go:117] "RemoveContainer" containerID="d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75"
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.222949 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75\": container with ID starting with d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75 not found: ID does not exist" containerID="d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.223033 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75"} err="failed to get container status \"d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75\": rpc error: code = NotFound desc = could not find container \"d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75\": container with ID starting with d6a35425c90b18fbe9e4730d3566e1a10343f541ac5eccd9145e3375295b8a75 not found: ID does not exist"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.223102 4832 scope.go:117] "RemoveContainer" containerID="bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16"
Jan 25 08:18:40 crc kubenswrapper[4832]: E0125 08:18:40.223416 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16\": container with ID starting with bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16 not found: ID does not exist" containerID="bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.223440 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16"} err="failed to get container status \"bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16\": rpc error: code = NotFound desc = could not find container \"bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16\": container with ID starting with bc92d92afa96c88a2d68885c5bc1fea24da6a85f74e6e5429f981fc324348a16 not found: ID does not exist"
Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.227594 4832
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.227807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klsbx\" (UniqueName: \"kubernetes.io/projected/9101d936-3e35-4a66-92e9-88560d52bdaf-kube-api-access-klsbx\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.227904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-scripts\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.228024 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-config-data\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.228164 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.228268 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-log-httpd\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.228428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.228516 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-run-httpd\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.330321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-run-httpd\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.330748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.330827 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klsbx\" (UniqueName: \"kubernetes.io/projected/9101d936-3e35-4a66-92e9-88560d52bdaf-kube-api-access-klsbx\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " 
pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.330878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-scripts\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.330965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-config-data\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.331167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-run-httpd\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.331171 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.331374 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-log-httpd\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.331560 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.333928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-log-httpd\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.336991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-scripts\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.337221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.344182 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.368107 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-config-data\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.368162 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-klsbx\" (UniqueName: \"kubernetes.io/projected/9101d936-3e35-4a66-92e9-88560d52bdaf-kube-api-access-klsbx\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.374061 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.501604 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.790596 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 08:18:40 crc kubenswrapper[4832]: I0125 08:18:40.790600 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 08:18:41 crc kubenswrapper[4832]: I0125 08:18:41.038048 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:41 crc kubenswrapper[4832]: I0125 08:18:41.082979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerStarted","Data":"357278623f287155e873ee794b78e37bac452c149062b3ec2e090bd2dccc5e96"} Jan 25 08:18:41 crc 
kubenswrapper[4832]: I0125 08:18:41.681938 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51d9c21-2b71-46f0-8b63-9961d75247fe" path="/var/lib/kubelet/pods/a51d9c21-2b71-46f0-8b63-9961d75247fe/volumes" Jan 25 08:18:42 crc kubenswrapper[4832]: I0125 08:18:42.093260 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerStarted","Data":"d765879c71739e6935bf6475d537272526ee231565a2f327f71ffab075c3e247"} Jan 25 08:18:43 crc kubenswrapper[4832]: I0125 08:18:43.104711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerStarted","Data":"a765f8d10900cfcafae85d87cecf2181a5ffbe8690b52f95b4fd800d5394f489"} Jan 25 08:18:44 crc kubenswrapper[4832]: I0125 08:18:44.122882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerStarted","Data":"52785ee0645da8dc5ff72f013d11ace083baf5422213fae6de8d4578a40f8eda"} Jan 25 08:18:45 crc kubenswrapper[4832]: I0125 08:18:45.134632 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerStarted","Data":"4fa6d63be10b5d4711e21498893af0f9fa399d0356e4c5337cf455531c592b58"} Jan 25 08:18:45 crc kubenswrapper[4832]: I0125 08:18:45.135950 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:18:45 crc kubenswrapper[4832]: I0125 08:18:45.185886 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.143953648 podStartE2EDuration="5.185860048s" podCreationTimestamp="2026-01-25 08:18:40 +0000 UTC" firstStartedPulling="2026-01-25 08:18:41.043810158 +0000 UTC m=+1303.717633691" lastFinishedPulling="2026-01-25 
08:18:44.085716518 +0000 UTC m=+1306.759540091" observedRunningTime="2026-01-25 08:18:45.159176808 +0000 UTC m=+1307.833000341" watchObservedRunningTime="2026-01-25 08:18:45.185860048 +0000 UTC m=+1307.859683581" Jan 25 08:18:45 crc kubenswrapper[4832]: I0125 08:18:45.263034 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 25 08:18:45 crc kubenswrapper[4832]: I0125 08:18:45.268147 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 25 08:18:45 crc kubenswrapper[4832]: I0125 08:18:45.269890 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 25 08:18:45 crc kubenswrapper[4832]: I0125 08:18:45.393753 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 25 08:18:46 crc kubenswrapper[4832]: I0125 08:18:46.149520 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.122868 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.156238 4832 generic.go:334] "Generic (PLEG): container finished" podID="5bbea8c8-972b-41f2-b1e7-e2aa7f521384" containerID="6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e" exitCode=137 Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.156322 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.156360 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5bbea8c8-972b-41f2-b1e7-e2aa7f521384","Type":"ContainerDied","Data":"6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e"} Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.156445 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5bbea8c8-972b-41f2-b1e7-e2aa7f521384","Type":"ContainerDied","Data":"1c923056f904629c76f592bad52d3ec1f1d7d4d8be0159e1ee7ee63afdd7b2f2"} Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.156476 4832 scope.go:117] "RemoveContainer" containerID="6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.171357 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-combined-ca-bundle\") pod \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.171488 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-config-data\") pod \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.171613 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqf4s\" (UniqueName: \"kubernetes.io/projected/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-kube-api-access-fqf4s\") pod \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\" (UID: \"5bbea8c8-972b-41f2-b1e7-e2aa7f521384\") " Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.183215 4832 
scope.go:117] "RemoveContainer" containerID="6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e" Jan 25 08:18:47 crc kubenswrapper[4832]: E0125 08:18:47.186925 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e\": container with ID starting with 6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e not found: ID does not exist" containerID="6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.186970 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e"} err="failed to get container status \"6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e\": rpc error: code = NotFound desc = could not find container \"6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e\": container with ID starting with 6e9dd37c0976baa93da3bc4c1f6d9f74625689b52e41de0aedb042657c74888e not found: ID does not exist" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.189694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-kube-api-access-fqf4s" (OuterVolumeSpecName: "kube-api-access-fqf4s") pod "5bbea8c8-972b-41f2-b1e7-e2aa7f521384" (UID: "5bbea8c8-972b-41f2-b1e7-e2aa7f521384"). InnerVolumeSpecName "kube-api-access-fqf4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.199615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-config-data" (OuterVolumeSpecName: "config-data") pod "5bbea8c8-972b-41f2-b1e7-e2aa7f521384" (UID: "5bbea8c8-972b-41f2-b1e7-e2aa7f521384"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.208685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bbea8c8-972b-41f2-b1e7-e2aa7f521384" (UID: "5bbea8c8-972b-41f2-b1e7-e2aa7f521384"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.273727 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.273772 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.273789 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqf4s\" (UniqueName: \"kubernetes.io/projected/5bbea8c8-972b-41f2-b1e7-e2aa7f521384-kube-api-access-fqf4s\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.495169 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.511337 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.527107 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:47 crc kubenswrapper[4832]: E0125 08:18:47.527793 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbea8c8-972b-41f2-b1e7-e2aa7f521384" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.527819 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbea8c8-972b-41f2-b1e7-e2aa7f521384" containerName="nova-cell1-novncproxy-novncproxy" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.528076 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bbea8c8-972b-41f2-b1e7-e2aa7f521384" containerName="nova-cell1-novncproxy-novncproxy" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.529019 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.534981 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.535170 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.535310 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.538726 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.580449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.580514 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z6j2\" (UniqueName: 
\"kubernetes.io/projected/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-kube-api-access-8z6j2\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.580673 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.580713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.580810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.682101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.682246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.682935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.682976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z6j2\" (UniqueName: \"kubernetes.io/projected/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-kube-api-access-8z6j2\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.683009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.684306 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bbea8c8-972b-41f2-b1e7-e2aa7f521384" path="/var/lib/kubelet/pods/5bbea8c8-972b-41f2-b1e7-e2aa7f521384/volumes" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.687327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.687362 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.687643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.687744 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.698761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z6j2\" (UniqueName: \"kubernetes.io/projected/c420c690-6a2a-4ccc-876b-b3ca1d5d8781-kube-api-access-8z6j2\") pod \"nova-cell1-novncproxy-0\" (UID: \"c420c690-6a2a-4ccc-876b-b3ca1d5d8781\") " pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:47 crc kubenswrapper[4832]: I0125 08:18:47.847178 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:48 crc kubenswrapper[4832]: I0125 08:18:48.355992 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 25 08:18:48 crc kubenswrapper[4832]: W0125 08:18:48.381348 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc420c690_6a2a_4ccc_876b_b3ca1d5d8781.slice/crio-10e235b2802f124d10a6231a01a7ebd2a7b5c443bb31432b2a9aeb05c38a79e8 WatchSource:0}: Error finding container 10e235b2802f124d10a6231a01a7ebd2a7b5c443bb31432b2a9aeb05c38a79e8: Status 404 returned error can't find the container with id 10e235b2802f124d10a6231a01a7ebd2a7b5c443bb31432b2a9aeb05c38a79e8 Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.174128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c420c690-6a2a-4ccc-876b-b3ca1d5d8781","Type":"ContainerStarted","Data":"0c7614053f4bef27987b88ac8031dccedb0c13e5d1a349d7e3dba105d072993d"} Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.174612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c420c690-6a2a-4ccc-876b-b3ca1d5d8781","Type":"ContainerStarted","Data":"10e235b2802f124d10a6231a01a7ebd2a7b5c443bb31432b2a9aeb05c38a79e8"} Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.199840 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.199816142 podStartE2EDuration="2.199816142s" podCreationTimestamp="2026-01-25 08:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:49.187625701 +0000 UTC m=+1311.861449234" watchObservedRunningTime="2026-01-25 08:18:49.199816142 +0000 UTC m=+1311.873639675" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 
08:18:49.711827 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.712307 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.712526 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.712541 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.715319 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.715365 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.906257 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-87zjq"] Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.908074 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:49 crc kubenswrapper[4832]: I0125 08:18:49.918418 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-87zjq"] Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.033451 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-config\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.033543 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.033573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.033593 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5s8\" (UniqueName: \"kubernetes.io/projected/2422fda2-c886-45e9-93ee-8ef936a365f8-kube-api-access-sd5s8\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.033630 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.033648 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.135524 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-config\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.135635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.135680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.135707 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sd5s8\" (UniqueName: \"kubernetes.io/projected/2422fda2-c886-45e9-93ee-8ef936a365f8-kube-api-access-sd5s8\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.135750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.135778 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.136907 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.136907 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.136958 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.136913 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.137477 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-config\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.160368 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd5s8\" (UniqueName: \"kubernetes.io/projected/2422fda2-c886-45e9-93ee-8ef936a365f8-kube-api-access-sd5s8\") pod \"dnsmasq-dns-59cf4bdb65-87zjq\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.230786 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:50 crc kubenswrapper[4832]: I0125 08:18:50.759215 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-87zjq"] Jan 25 08:18:51 crc kubenswrapper[4832]: I0125 08:18:51.192348 4832 generic.go:334] "Generic (PLEG): container finished" podID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerID="441710a55dd61d984bbd4a2b8c2df3a20de3c702498a2d5e7bf09b6f1ee5621b" exitCode=0 Jan 25 08:18:51 crc kubenswrapper[4832]: I0125 08:18:51.192459 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" event={"ID":"2422fda2-c886-45e9-93ee-8ef936a365f8","Type":"ContainerDied","Data":"441710a55dd61d984bbd4a2b8c2df3a20de3c702498a2d5e7bf09b6f1ee5621b"} Jan 25 08:18:51 crc kubenswrapper[4832]: I0125 08:18:51.192819 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" event={"ID":"2422fda2-c886-45e9-93ee-8ef936a365f8","Type":"ContainerStarted","Data":"be555f6210e88b40a8756acd65b7d9518ab4de3c485d173dd4d7c00a78f76ab3"} Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.149647 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.149960 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.150004 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.150723 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a703522300807412e74dfb0216f7c46b79210bcc992ea5f87976c5936fa1c4d9"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.150773 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://a703522300807412e74dfb0216f7c46b79210bcc992ea5f87976c5936fa1c4d9" gracePeriod=600 Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.212750 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.217518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" event={"ID":"2422fda2-c886-45e9-93ee-8ef936a365f8","Type":"ContainerStarted","Data":"258632df7d35708001d8d4e18182a4b71a169fc05d60153ff36d5d1f35c4a34e"} Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.217673 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-log" containerID="cri-o://d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab" gracePeriod=30 Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.217749 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-api" 
containerID="cri-o://1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646" gracePeriod=30 Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.251205 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" podStartSLOduration=3.251183961 podStartE2EDuration="3.251183961s" podCreationTimestamp="2026-01-25 08:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:52.238071382 +0000 UTC m=+1314.911894905" watchObservedRunningTime="2026-01-25 08:18:52.251183961 +0000 UTC m=+1314.925007494" Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.347267 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.347667 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-central-agent" containerID="cri-o://d765879c71739e6935bf6475d537272526ee231565a2f327f71ffab075c3e247" gracePeriod=30 Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.347804 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-notification-agent" containerID="cri-o://a765f8d10900cfcafae85d87cecf2181a5ffbe8690b52f95b4fd800d5394f489" gracePeriod=30 Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.347799 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="sg-core" containerID="cri-o://52785ee0645da8dc5ff72f013d11ace083baf5422213fae6de8d4578a40f8eda" gracePeriod=30 Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.347973 4832 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="proxy-httpd" containerID="cri-o://4fa6d63be10b5d4711e21498893af0f9fa399d0356e4c5337cf455531c592b58" gracePeriod=30 Jan 25 08:18:52 crc kubenswrapper[4832]: I0125 08:18:52.848198 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.230018 4832 generic.go:334] "Generic (PLEG): container finished" podID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerID="d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab" exitCode=143 Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.230092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f893740-ce4d-4ee2-994d-98739d4b1f7d","Type":"ContainerDied","Data":"d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab"} Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.235932 4832 generic.go:334] "Generic (PLEG): container finished" podID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerID="4fa6d63be10b5d4711e21498893af0f9fa399d0356e4c5337cf455531c592b58" exitCode=0 Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.235980 4832 generic.go:334] "Generic (PLEG): container finished" podID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerID="52785ee0645da8dc5ff72f013d11ace083baf5422213fae6de8d4578a40f8eda" exitCode=2 Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.235999 4832 generic.go:334] "Generic (PLEG): container finished" podID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerID="d765879c71739e6935bf6475d537272526ee231565a2f327f71ffab075c3e247" exitCode=0 Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.236095 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerDied","Data":"4fa6d63be10b5d4711e21498893af0f9fa399d0356e4c5337cf455531c592b58"} Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.236137 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerDied","Data":"52785ee0645da8dc5ff72f013d11ace083baf5422213fae6de8d4578a40f8eda"} Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.236156 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerDied","Data":"d765879c71739e6935bf6475d537272526ee231565a2f327f71ffab075c3e247"} Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.244692 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="a703522300807412e74dfb0216f7c46b79210bcc992ea5f87976c5936fa1c4d9" exitCode=0 Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.244784 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"a703522300807412e74dfb0216f7c46b79210bcc992ea5f87976c5936fa1c4d9"} Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.244845 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f"} Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.244863 4832 scope.go:117] "RemoveContainer" containerID="bc7fb24eb792d448b55ed5e2d984c4783247ec2dc70708259ed13f1676a5263b" Jan 25 08:18:53 crc kubenswrapper[4832]: I0125 08:18:53.245721 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.256769 4832 generic.go:334] "Generic (PLEG): container finished" podID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerID="a765f8d10900cfcafae85d87cecf2181a5ffbe8690b52f95b4fd800d5394f489" exitCode=0 Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.257109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerDied","Data":"a765f8d10900cfcafae85d87cecf2181a5ffbe8690b52f95b4fd800d5394f489"} Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.606617 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.625878 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-log-httpd\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.625946 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-ceilometer-tls-certs\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.626045 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klsbx\" (UniqueName: \"kubernetes.io/projected/9101d936-3e35-4a66-92e9-88560d52bdaf-kube-api-access-klsbx\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.626130 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-scripts\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.626203 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-combined-ca-bundle\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.626349 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-sg-core-conf-yaml\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.626455 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-config-data\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.626484 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-run-httpd\") pod \"9101d936-3e35-4a66-92e9-88560d52bdaf\" (UID: \"9101d936-3e35-4a66-92e9-88560d52bdaf\") " Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.627479 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.628712 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.633514 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-scripts" (OuterVolumeSpecName: "scripts") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.637703 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9101d936-3e35-4a66-92e9-88560d52bdaf-kube-api-access-klsbx" (OuterVolumeSpecName: "kube-api-access-klsbx") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "kube-api-access-klsbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.686898 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.729588 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.729613 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9101d936-3e35-4a66-92e9-88560d52bdaf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.729639 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klsbx\" (UniqueName: \"kubernetes.io/projected/9101d936-3e35-4a66-92e9-88560d52bdaf-kube-api-access-klsbx\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.729648 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.729658 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.737048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.747286 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.777067 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-config-data" (OuterVolumeSpecName: "config-data") pod "9101d936-3e35-4a66-92e9-88560d52bdaf" (UID: "9101d936-3e35-4a66-92e9-88560d52bdaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.831784 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.831813 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:54 crc kubenswrapper[4832]: I0125 08:18:54.831824 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9101d936-3e35-4a66-92e9-88560d52bdaf-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.281661 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9101d936-3e35-4a66-92e9-88560d52bdaf","Type":"ContainerDied","Data":"357278623f287155e873ee794b78e37bac452c149062b3ec2e090bd2dccc5e96"} Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.281713 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.281753 4832 scope.go:117] "RemoveContainer" containerID="4fa6d63be10b5d4711e21498893af0f9fa399d0356e4c5337cf455531c592b58" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.319823 4832 scope.go:117] "RemoveContainer" containerID="52785ee0645da8dc5ff72f013d11ace083baf5422213fae6de8d4578a40f8eda" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.386294 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.401786 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.418442 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:55 crc kubenswrapper[4832]: E0125 08:18:55.420819 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="sg-core" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.420863 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="sg-core" Jan 25 08:18:55 crc kubenswrapper[4832]: E0125 08:18:55.420922 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-notification-agent" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.420933 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-notification-agent" Jan 25 08:18:55 crc kubenswrapper[4832]: E0125 08:18:55.420974 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="proxy-httpd" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.420983 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="proxy-httpd" Jan 25 08:18:55 crc kubenswrapper[4832]: E0125 08:18:55.421013 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-central-agent" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.421020 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-central-agent" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.421593 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-central-agent" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.421619 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="proxy-httpd" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.421650 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="ceilometer-notification-agent" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.421676 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" containerName="sg-core" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.422325 4832 scope.go:117] "RemoveContainer" containerID="a765f8d10900cfcafae85d87cecf2181a5ffbe8690b52f95b4fd800d5394f489" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.426905 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.436199 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.436500 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.436733 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.437416 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zz9t\" (UniqueName: \"kubernetes.io/projected/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-kube-api-access-6zz9t\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457269 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-config-data\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-scripts\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457370 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457404 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-log-httpd\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-run-httpd\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457491 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.457509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.512369 4832 scope.go:117] "RemoveContainer" containerID="d765879c71739e6935bf6475d537272526ee231565a2f327f71ffab075c3e247" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560096 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560154 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zz9t\" (UniqueName: \"kubernetes.io/projected/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-kube-api-access-6zz9t\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-config-data\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-scripts\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560427 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560457 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-log-httpd\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.560498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-run-httpd\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.561152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-log-httpd\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.561190 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-run-httpd\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.567902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-scripts\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.570983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.571547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.572274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-config-data\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.577059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.580198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zz9t\" (UniqueName: \"kubernetes.io/projected/eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468-kube-api-access-6zz9t\") pod \"ceilometer-0\" (UID: \"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468\") " pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.683861 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9101d936-3e35-4a66-92e9-88560d52bdaf" path="/var/lib/kubelet/pods/9101d936-3e35-4a66-92e9-88560d52bdaf/volumes" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.834870 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.844095 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.867017 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-config-data\") pod \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.867097 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-combined-ca-bundle\") pod \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.867202 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjv9l\" (UniqueName: \"kubernetes.io/projected/7f893740-ce4d-4ee2-994d-98739d4b1f7d-kube-api-access-vjv9l\") pod \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.867305 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f893740-ce4d-4ee2-994d-98739d4b1f7d-logs\") pod \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\" (UID: \"7f893740-ce4d-4ee2-994d-98739d4b1f7d\") " Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.869058 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f893740-ce4d-4ee2-994d-98739d4b1f7d-logs" (OuterVolumeSpecName: "logs") pod "7f893740-ce4d-4ee2-994d-98739d4b1f7d" (UID: "7f893740-ce4d-4ee2-994d-98739d4b1f7d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.887612 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f893740-ce4d-4ee2-994d-98739d4b1f7d-kube-api-access-vjv9l" (OuterVolumeSpecName: "kube-api-access-vjv9l") pod "7f893740-ce4d-4ee2-994d-98739d4b1f7d" (UID: "7f893740-ce4d-4ee2-994d-98739d4b1f7d"). InnerVolumeSpecName "kube-api-access-vjv9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.910810 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f893740-ce4d-4ee2-994d-98739d4b1f7d" (UID: "7f893740-ce4d-4ee2-994d-98739d4b1f7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.912185 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-config-data" (OuterVolumeSpecName: "config-data") pod "7f893740-ce4d-4ee2-994d-98739d4b1f7d" (UID: "7f893740-ce4d-4ee2-994d-98739d4b1f7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.983075 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjv9l\" (UniqueName: \"kubernetes.io/projected/7f893740-ce4d-4ee2-994d-98739d4b1f7d-kube-api-access-vjv9l\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.983119 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f893740-ce4d-4ee2-994d-98739d4b1f7d-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.983138 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:55 crc kubenswrapper[4832]: I0125 08:18:55.983158 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f893740-ce4d-4ee2-994d-98739d4b1f7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.293949 4832 generic.go:334] "Generic (PLEG): container finished" podID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerID="1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646" exitCode=0 Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.294051 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.294034 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f893740-ce4d-4ee2-994d-98739d4b1f7d","Type":"ContainerDied","Data":"1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646"} Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.294130 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7f893740-ce4d-4ee2-994d-98739d4b1f7d","Type":"ContainerDied","Data":"6e9cadbf9897c01825e4da6c935d68d38c55b39f1edc627364ba9456e3e27986"} Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.294165 4832 scope.go:117] "RemoveContainer" containerID="1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.335562 4832 scope.go:117] "RemoveContainer" containerID="d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.349548 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.357965 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.367601 4832 scope.go:117] "RemoveContainer" containerID="1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646" Jan 25 08:18:56 crc kubenswrapper[4832]: E0125 08:18:56.368057 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646\": container with ID starting with 1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646 not found: ID does not exist" containerID="1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.368111 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646"} err="failed to get container status \"1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646\": rpc error: code = NotFound desc = could not find container \"1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646\": container with ID starting with 1cf457f5f0bc24ca1984ac878d4897dfabdd8be119fc99537048cec1e98fd646 not found: ID does not exist" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.368139 4832 scope.go:117] "RemoveContainer" containerID="d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab" Jan 25 08:18:56 crc kubenswrapper[4832]: E0125 08:18:56.368429 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab\": container with ID starting with d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab not found: ID does not exist" containerID="d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.368452 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab"} err="failed to get container status \"d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab\": rpc error: code = NotFound desc = could not find container \"d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab\": container with ID starting with d3f82876a152280be0153952ab474002f89799e62b5c3764abcb48c4ba1f79ab not found: ID does not exist" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.386760 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 25 08:18:56 crc kubenswrapper[4832]: W0125 08:18:56.389876 4832 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb5b7f6d_8b64_475d_b4b4_c12ce7e9c468.slice/crio-5028a6103337a1c45039f324b389e86af8ad24710b6a583531147bc00a33a05e WatchSource:0}: Error finding container 5028a6103337a1c45039f324b389e86af8ad24710b6a583531147bc00a33a05e: Status 404 returned error can't find the container with id 5028a6103337a1c45039f324b389e86af8ad24710b6a583531147bc00a33a05e Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.393595 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.398144 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:56 crc kubenswrapper[4832]: E0125 08:18:56.398590 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-api" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.398610 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-api" Jan 25 08:18:56 crc kubenswrapper[4832]: E0125 08:18:56.398639 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-log" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.398647 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-log" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.398841 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-api" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.398859 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" containerName="nova-api-log" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 
08:18:56.399850 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.402112 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.402605 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.402933 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.407232 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.498717 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.498803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.498827 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcjqg\" (UniqueName: \"kubernetes.io/projected/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-kube-api-access-wcjqg\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.498860 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-config-data\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.499005 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.499171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-logs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.600799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.600850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcjqg\" (UniqueName: \"kubernetes.io/projected/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-kube-api-access-wcjqg\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.600886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-config-data\") pod \"nova-api-0\" (UID: 
\"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.600920 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.600989 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-logs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.601030 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.601563 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-logs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.607762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.608559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.609878 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.610738 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-config-data\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.618595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcjqg\" (UniqueName: \"kubernetes.io/projected/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-kube-api-access-wcjqg\") pod \"nova-api-0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " pod="openstack/nova-api-0" Jan 25 08:18:56 crc kubenswrapper[4832]: I0125 08:18:56.725221 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:18:57 crc kubenswrapper[4832]: I0125 08:18:57.309788 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:18:57 crc kubenswrapper[4832]: I0125 08:18:57.314134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468","Type":"ContainerStarted","Data":"a85db5095c5ca75321685e32f6df518b33b890fed114b064ea0a8233a23d4bf2"} Jan 25 08:18:57 crc kubenswrapper[4832]: I0125 08:18:57.314184 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468","Type":"ContainerStarted","Data":"5028a6103337a1c45039f324b389e86af8ad24710b6a583531147bc00a33a05e"} Jan 25 08:18:57 crc kubenswrapper[4832]: I0125 08:18:57.713218 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f893740-ce4d-4ee2-994d-98739d4b1f7d" path="/var/lib/kubelet/pods/7f893740-ce4d-4ee2-994d-98739d4b1f7d/volumes" Jan 25 08:18:57 crc kubenswrapper[4832]: I0125 08:18:57.847494 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:57 crc kubenswrapper[4832]: I0125 08:18:57.871640 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.329717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0","Type":"ContainerStarted","Data":"9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2"} Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.330090 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0","Type":"ContainerStarted","Data":"bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc"} Jan 
25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.330104 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0","Type":"ContainerStarted","Data":"fa0db33144a070e871cc8148838c11bce025a8dc568bc860970ad3ed9b8983a6"} Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.332030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468","Type":"ContainerStarted","Data":"9d881df17c3a0ad609851db8f792ad31007a654bd365680d5b3565310368a1c6"} Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.352641 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.352621419 podStartE2EDuration="2.352621419s" podCreationTimestamp="2026-01-25 08:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:18:58.351813495 +0000 UTC m=+1321.025637048" watchObservedRunningTime="2026-01-25 08:18:58.352621419 +0000 UTC m=+1321.026444962" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.361090 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.557596 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6jrsn"] Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.559003 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.566071 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.566201 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.567511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6jrsn"] Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.643119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.643177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-config-data\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.643274 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-scripts\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.643520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz762\" (UniqueName: 
\"kubernetes.io/projected/043a28cc-bd52-47d0-83cd-59e5b8b101b4-kube-api-access-tz762\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.744923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-scripts\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.745611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz762\" (UniqueName: \"kubernetes.io/projected/043a28cc-bd52-47d0-83cd-59e5b8b101b4-kube-api-access-tz762\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.745715 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.745743 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-config-data\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.750585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-scripts\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.750783 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-config-data\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.751796 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:58 crc kubenswrapper[4832]: I0125 08:18:58.764073 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz762\" (UniqueName: \"kubernetes.io/projected/043a28cc-bd52-47d0-83cd-59e5b8b101b4-kube-api-access-tz762\") pod \"nova-cell1-cell-mapping-6jrsn\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:59 crc kubenswrapper[4832]: I0125 08:18:59.030878 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:18:59 crc kubenswrapper[4832]: I0125 08:18:59.345667 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468","Type":"ContainerStarted","Data":"876a1434d91202b43acf24c661c391a8893242e33d90ffa4a01e7173c2cdc784"} Jan 25 08:18:59 crc kubenswrapper[4832]: I0125 08:18:59.498036 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6jrsn"] Jan 25 08:18:59 crc kubenswrapper[4832]: W0125 08:18:59.504159 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043a28cc_bd52_47d0_83cd_59e5b8b101b4.slice/crio-936d02ba5f54aea06441037df31a18b6fc9c8c2b4ead21b5485d4e8ad80baecd WatchSource:0}: Error finding container 936d02ba5f54aea06441037df31a18b6fc9c8c2b4ead21b5485d4e8ad80baecd: Status 404 returned error can't find the container with id 936d02ba5f54aea06441037df31a18b6fc9c8c2b4ead21b5485d4e8ad80baecd Jan 25 08:19:00 crc kubenswrapper[4832]: I0125 08:19:00.232614 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:19:00 crc kubenswrapper[4832]: I0125 08:19:00.378566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6jrsn" event={"ID":"043a28cc-bd52-47d0-83cd-59e5b8b101b4","Type":"ContainerStarted","Data":"19bfe1ab953cc86ae66dd70baae770eb99576c2e1d66361d4363058af63653f2"} Jan 25 08:19:00 crc kubenswrapper[4832]: I0125 08:19:00.378630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6jrsn" event={"ID":"043a28cc-bd52-47d0-83cd-59e5b8b101b4","Type":"ContainerStarted","Data":"936d02ba5f54aea06441037df31a18b6fc9c8c2b4ead21b5485d4e8ad80baecd"} Jan 25 08:19:00 crc kubenswrapper[4832]: I0125 08:19:00.401852 4832 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gbk4s"] Jan 25 08:19:00 crc kubenswrapper[4832]: I0125 08:19:00.402433 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" podUID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerName="dnsmasq-dns" containerID="cri-o://0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f" gracePeriod=10 Jan 25 08:19:00 crc kubenswrapper[4832]: I0125 08:19:00.419639 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6jrsn" podStartSLOduration=2.419611662 podStartE2EDuration="2.419611662s" podCreationTimestamp="2026-01-25 08:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:19:00.398030078 +0000 UTC m=+1323.071853611" watchObservedRunningTime="2026-01-25 08:19:00.419611662 +0000 UTC m=+1323.093435195" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.031303 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.118296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-svc\") pod \"b4fac470-1791-4461-9a15-d3ce171d8f15\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.118441 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-sb\") pod \"b4fac470-1791-4461-9a15-d3ce171d8f15\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.118472 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-swift-storage-0\") pod \"b4fac470-1791-4461-9a15-d3ce171d8f15\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.118547 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdmc2\" (UniqueName: \"kubernetes.io/projected/b4fac470-1791-4461-9a15-d3ce171d8f15-kube-api-access-zdmc2\") pod \"b4fac470-1791-4461-9a15-d3ce171d8f15\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.118573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-config\") pod \"b4fac470-1791-4461-9a15-d3ce171d8f15\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.118630 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-nb\") pod \"b4fac470-1791-4461-9a15-d3ce171d8f15\" (UID: \"b4fac470-1791-4461-9a15-d3ce171d8f15\") " Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.125645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fac470-1791-4461-9a15-d3ce171d8f15-kube-api-access-zdmc2" (OuterVolumeSpecName: "kube-api-access-zdmc2") pod "b4fac470-1791-4461-9a15-d3ce171d8f15" (UID: "b4fac470-1791-4461-9a15-d3ce171d8f15"). InnerVolumeSpecName "kube-api-access-zdmc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.195661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4fac470-1791-4461-9a15-d3ce171d8f15" (UID: "b4fac470-1791-4461-9a15-d3ce171d8f15"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.208220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b4fac470-1791-4461-9a15-d3ce171d8f15" (UID: "b4fac470-1791-4461-9a15-d3ce171d8f15"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.210511 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4fac470-1791-4461-9a15-d3ce171d8f15" (UID: "b4fac470-1791-4461-9a15-d3ce171d8f15"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.213990 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-config" (OuterVolumeSpecName: "config") pod "b4fac470-1791-4461-9a15-d3ce171d8f15" (UID: "b4fac470-1791-4461-9a15-d3ce171d8f15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.221759 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.221789 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.221799 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdmc2\" (UniqueName: \"kubernetes.io/projected/b4fac470-1791-4461-9a15-d3ce171d8f15-kube-api-access-zdmc2\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.221811 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.221821 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.258542 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4fac470-1791-4461-9a15-d3ce171d8f15" (UID: "b4fac470-1791-4461-9a15-d3ce171d8f15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.324934 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4fac470-1791-4461-9a15-d3ce171d8f15-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.392968 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468","Type":"ContainerStarted","Data":"95dfd557715c5f907d1a1ca13b40c98c91abc755603afb92e57b233dee92dd78"} Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.394307 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.399735 4832 generic.go:334] "Generic (PLEG): container finished" podID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerID="0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f" exitCode=0 Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.400452 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.404708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" event={"ID":"b4fac470-1791-4461-9a15-d3ce171d8f15","Type":"ContainerDied","Data":"0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f"} Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.404888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gbk4s" event={"ID":"b4fac470-1791-4461-9a15-d3ce171d8f15","Type":"ContainerDied","Data":"78dde9b3b81ac468ad5541e6b4561506f93f4ea181ec74d91bdfe868317cfa89"} Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.404984 4832 scope.go:117] "RemoveContainer" containerID="0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.445175 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.626339297 podStartE2EDuration="6.445148218s" podCreationTimestamp="2026-01-25 08:18:55 +0000 UTC" firstStartedPulling="2026-01-25 08:18:56.393331615 +0000 UTC m=+1319.067155148" lastFinishedPulling="2026-01-25 08:19:00.212140486 +0000 UTC m=+1322.885964069" observedRunningTime="2026-01-25 08:19:01.423210422 +0000 UTC m=+1324.097033945" watchObservedRunningTime="2026-01-25 08:19:01.445148218 +0000 UTC m=+1324.118971751" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.451766 4832 scope.go:117] "RemoveContainer" containerID="ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.458268 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gbk4s"] Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.467609 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gbk4s"] Jan 25 
08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.474976 4832 scope.go:117] "RemoveContainer" containerID="0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f" Jan 25 08:19:01 crc kubenswrapper[4832]: E0125 08:19:01.475638 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f\": container with ID starting with 0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f not found: ID does not exist" containerID="0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.475686 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f"} err="failed to get container status \"0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f\": rpc error: code = NotFound desc = could not find container \"0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f\": container with ID starting with 0319d357fe2a0f6513ef7ddeeeb79fe495ee1226844eedfb1c993bba74675e0f not found: ID does not exist" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.475716 4832 scope.go:117] "RemoveContainer" containerID="ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce" Jan 25 08:19:01 crc kubenswrapper[4832]: E0125 08:19:01.476178 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce\": container with ID starting with ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce not found: ID does not exist" containerID="ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.476241 4832 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce"} err="failed to get container status \"ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce\": rpc error: code = NotFound desc = could not find container \"ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce\": container with ID starting with ee20077fe32eb2c6c4eeb72f0d13c25e701aaabf4d049ebb28591414265d2fce not found: ID does not exist" Jan 25 08:19:01 crc kubenswrapper[4832]: I0125 08:19:01.686350 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fac470-1791-4461-9a15-d3ce171d8f15" path="/var/lib/kubelet/pods/b4fac470-1791-4461-9a15-d3ce171d8f15/volumes" Jan 25 08:19:05 crc kubenswrapper[4832]: I0125 08:19:05.445746 4832 generic.go:334] "Generic (PLEG): container finished" podID="043a28cc-bd52-47d0-83cd-59e5b8b101b4" containerID="19bfe1ab953cc86ae66dd70baae770eb99576c2e1d66361d4363058af63653f2" exitCode=0 Jan 25 08:19:05 crc kubenswrapper[4832]: I0125 08:19:05.445820 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6jrsn" event={"ID":"043a28cc-bd52-47d0-83cd-59e5b8b101b4","Type":"ContainerDied","Data":"19bfe1ab953cc86ae66dd70baae770eb99576c2e1d66361d4363058af63653f2"} Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.725464 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.725917 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.904274 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.972966 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-combined-ca-bundle\") pod \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.973284 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-config-data\") pod \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.973358 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz762\" (UniqueName: \"kubernetes.io/projected/043a28cc-bd52-47d0-83cd-59e5b8b101b4-kube-api-access-tz762\") pod \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.973497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-scripts\") pod \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\" (UID: \"043a28cc-bd52-47d0-83cd-59e5b8b101b4\") " Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.979319 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-scripts" (OuterVolumeSpecName: "scripts") pod "043a28cc-bd52-47d0-83cd-59e5b8b101b4" (UID: "043a28cc-bd52-47d0-83cd-59e5b8b101b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:06 crc kubenswrapper[4832]: I0125 08:19:06.979470 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043a28cc-bd52-47d0-83cd-59e5b8b101b4-kube-api-access-tz762" (OuterVolumeSpecName: "kube-api-access-tz762") pod "043a28cc-bd52-47d0-83cd-59e5b8b101b4" (UID: "043a28cc-bd52-47d0-83cd-59e5b8b101b4"). InnerVolumeSpecName "kube-api-access-tz762". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.030220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043a28cc-bd52-47d0-83cd-59e5b8b101b4" (UID: "043a28cc-bd52-47d0-83cd-59e5b8b101b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.032215 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-config-data" (OuterVolumeSpecName: "config-data") pod "043a28cc-bd52-47d0-83cd-59e5b8b101b4" (UID: "043a28cc-bd52-47d0-83cd-59e5b8b101b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.078708 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.078808 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz762\" (UniqueName: \"kubernetes.io/projected/043a28cc-bd52-47d0-83cd-59e5b8b101b4-kube-api-access-tz762\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.078877 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.078903 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043a28cc-bd52-47d0-83cd-59e5b8b101b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.476648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6jrsn" event={"ID":"043a28cc-bd52-47d0-83cd-59e5b8b101b4","Type":"ContainerDied","Data":"936d02ba5f54aea06441037df31a18b6fc9c8c2b4ead21b5485d4e8ad80baecd"} Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.476694 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936d02ba5f54aea06441037df31a18b6fc9c8c2b4ead21b5485d4e8ad80baecd" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.476726 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6jrsn" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.723234 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.723560 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-log" containerID="cri-o://bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc" gracePeriod=30 Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.723598 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-api" containerID="cri-o://9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2" gracePeriod=30 Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.738113 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.738494 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5f2f5901-82a8-4669-91aa-a8973cac5799" containerName="nova-scheduler-scheduler" containerID="cri-o://035976e0914f65e00fd75711cd7fc1f0543ef5eef21ce3fd3c8a346f34096785" gracePeriod=30 Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.746351 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.746356 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.751138 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.751855 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-log" containerID="cri-o://7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00" gracePeriod=30 Jan 25 08:19:07 crc kubenswrapper[4832]: I0125 08:19:07.751903 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-metadata" containerID="cri-o://cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af" gracePeriod=30 Jan 25 08:19:08 crc kubenswrapper[4832]: I0125 08:19:08.490734 4832 generic.go:334] "Generic (PLEG): container finished" podID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerID="7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00" exitCode=143 Jan 25 08:19:08 crc kubenswrapper[4832]: I0125 08:19:08.490793 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcff2a1c-2a06-4930-aec6-2970335e6e78","Type":"ContainerDied","Data":"7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00"} Jan 25 08:19:08 crc kubenswrapper[4832]: I0125 08:19:08.493951 4832 generic.go:334] "Generic (PLEG): container finished" podID="5f2f5901-82a8-4669-91aa-a8973cac5799" containerID="035976e0914f65e00fd75711cd7fc1f0543ef5eef21ce3fd3c8a346f34096785" exitCode=0 Jan 25 08:19:08 crc kubenswrapper[4832]: I0125 08:19:08.494058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"5f2f5901-82a8-4669-91aa-a8973cac5799","Type":"ContainerDied","Data":"035976e0914f65e00fd75711cd7fc1f0543ef5eef21ce3fd3c8a346f34096785"} Jan 25 08:19:08 crc kubenswrapper[4832]: I0125 08:19:08.497133 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerID="bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc" exitCode=143 Jan 25 08:19:08 crc kubenswrapper[4832]: I0125 08:19:08.497171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0","Type":"ContainerDied","Data":"bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc"} Jan 25 08:19:08 crc kubenswrapper[4832]: I0125 08:19:08.870252 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.016797 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q28dr\" (UniqueName: \"kubernetes.io/projected/5f2f5901-82a8-4669-91aa-a8973cac5799-kube-api-access-q28dr\") pod \"5f2f5901-82a8-4669-91aa-a8973cac5799\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.016945 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-config-data\") pod \"5f2f5901-82a8-4669-91aa-a8973cac5799\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.017207 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-combined-ca-bundle\") pod \"5f2f5901-82a8-4669-91aa-a8973cac5799\" (UID: \"5f2f5901-82a8-4669-91aa-a8973cac5799\") " Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 
08:19:09.042711 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2f5901-82a8-4669-91aa-a8973cac5799-kube-api-access-q28dr" (OuterVolumeSpecName: "kube-api-access-q28dr") pod "5f2f5901-82a8-4669-91aa-a8973cac5799" (UID: "5f2f5901-82a8-4669-91aa-a8973cac5799"). InnerVolumeSpecName "kube-api-access-q28dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.053605 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-config-data" (OuterVolumeSpecName: "config-data") pod "5f2f5901-82a8-4669-91aa-a8973cac5799" (UID: "5f2f5901-82a8-4669-91aa-a8973cac5799"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.063588 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f2f5901-82a8-4669-91aa-a8973cac5799" (UID: "5f2f5901-82a8-4669-91aa-a8973cac5799"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.119527 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.119563 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2f5901-82a8-4669-91aa-a8973cac5799-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.119576 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q28dr\" (UniqueName: \"kubernetes.io/projected/5f2f5901-82a8-4669-91aa-a8973cac5799-kube-api-access-q28dr\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.525204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f2f5901-82a8-4669-91aa-a8973cac5799","Type":"ContainerDied","Data":"74d8dffa6c375f60dc134a7cdd5905607b0d895f0a96d7bda72e5a7c8401eb3e"} Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.525291 4832 scope.go:117] "RemoveContainer" containerID="035976e0914f65e00fd75711cd7fc1f0543ef5eef21ce3fd3c8a346f34096785" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.525317 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.580302 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.593156 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.604304 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:19:09 crc kubenswrapper[4832]: E0125 08:19:09.604899 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043a28cc-bd52-47d0-83cd-59e5b8b101b4" containerName="nova-manage" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.604920 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="043a28cc-bd52-47d0-83cd-59e5b8b101b4" containerName="nova-manage" Jan 25 08:19:09 crc kubenswrapper[4832]: E0125 08:19:09.604930 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerName="init" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.604937 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerName="init" Jan 25 08:19:09 crc kubenswrapper[4832]: E0125 08:19:09.604962 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerName="dnsmasq-dns" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.604968 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerName="dnsmasq-dns" Jan 25 08:19:09 crc kubenswrapper[4832]: E0125 08:19:09.604994 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2f5901-82a8-4669-91aa-a8973cac5799" containerName="nova-scheduler-scheduler" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.605000 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f2f5901-82a8-4669-91aa-a8973cac5799" containerName="nova-scheduler-scheduler" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.605169 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="043a28cc-bd52-47d0-83cd-59e5b8b101b4" containerName="nova-manage" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.605196 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fac470-1791-4461-9a15-d3ce171d8f15" containerName="dnsmasq-dns" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.605222 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2f5901-82a8-4669-91aa-a8973cac5799" containerName="nova-scheduler-scheduler" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.605950 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.610907 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.614758 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.681168 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2f5901-82a8-4669-91aa-a8973cac5799" path="/var/lib/kubelet/pods/5f2f5901-82a8-4669-91aa-a8973cac5799/volumes" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.730511 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.730801 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-config-data\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.730889 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjl2\" (UniqueName: \"kubernetes.io/projected/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-kube-api-access-dbjl2\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.833489 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.833626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-config-data\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.833661 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbjl2\" (UniqueName: \"kubernetes.io/projected/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-kube-api-access-dbjl2\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.843247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-config-data\") pod \"nova-scheduler-0\" (UID: 
\"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.843253 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.850335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbjl2\" (UniqueName: \"kubernetes.io/projected/d322a933-38eb-4eb0-81c7-86d11a5f2d2c-kube-api-access-dbjl2\") pod \"nova-scheduler-0\" (UID: \"d322a933-38eb-4eb0-81c7-86d11a5f2d2c\") " pod="openstack/nova-scheduler-0" Jan 25 08:19:09 crc kubenswrapper[4832]: I0125 08:19:09.929207 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 25 08:19:10 crc kubenswrapper[4832]: I0125 08:19:10.372960 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 25 08:19:10 crc kubenswrapper[4832]: W0125 08:19:10.378575 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd322a933_38eb_4eb0_81c7_86d11a5f2d2c.slice/crio-faa8efc35e8f6b0877979d7b2a442aae038e095be80d985ca50f314b7b544435 WatchSource:0}: Error finding container faa8efc35e8f6b0877979d7b2a442aae038e095be80d985ca50f314b7b544435: Status 404 returned error can't find the container with id faa8efc35e8f6b0877979d7b2a442aae038e095be80d985ca50f314b7b544435 Jan 25 08:19:10 crc kubenswrapper[4832]: I0125 08:19:10.537786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d322a933-38eb-4eb0-81c7-86d11a5f2d2c","Type":"ContainerStarted","Data":"faa8efc35e8f6b0877979d7b2a442aae038e095be80d985ca50f314b7b544435"} Jan 25 08:19:10 crc 
kubenswrapper[4832]: I0125 08:19:10.892769 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:55458->10.217.0.196:8775: read: connection reset by peer" Jan 25 08:19:10 crc kubenswrapper[4832]: I0125 08:19:10.892971 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:55450->10.217.0.196:8775: read: connection reset by peer" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.401368 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.487499 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-nova-metadata-tls-certs\") pod \"fcff2a1c-2a06-4930-aec6-2970335e6e78\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.487728 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcff2a1c-2a06-4930-aec6-2970335e6e78-logs\") pod \"fcff2a1c-2a06-4930-aec6-2970335e6e78\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.487993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-config-data\") pod \"fcff2a1c-2a06-4930-aec6-2970335e6e78\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " Jan 25 08:19:11 crc 
kubenswrapper[4832]: I0125 08:19:11.488126 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-combined-ca-bundle\") pod \"fcff2a1c-2a06-4930-aec6-2970335e6e78\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.488177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcff2a1c-2a06-4930-aec6-2970335e6e78-logs" (OuterVolumeSpecName: "logs") pod "fcff2a1c-2a06-4930-aec6-2970335e6e78" (UID: "fcff2a1c-2a06-4930-aec6-2970335e6e78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.488733 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcff2a1c-2a06-4930-aec6-2970335e6e78-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.528276 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcff2a1c-2a06-4930-aec6-2970335e6e78" (UID: "fcff2a1c-2a06-4930-aec6-2970335e6e78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.538409 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-config-data" (OuterVolumeSpecName: "config-data") pod "fcff2a1c-2a06-4930-aec6-2970335e6e78" (UID: "fcff2a1c-2a06-4930-aec6-2970335e6e78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.567555 4832 generic.go:334] "Generic (PLEG): container finished" podID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerID="cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af" exitCode=0 Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.567812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcff2a1c-2a06-4930-aec6-2970335e6e78","Type":"ContainerDied","Data":"cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af"} Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.567909 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.567956 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcff2a1c-2a06-4930-aec6-2970335e6e78","Type":"ContainerDied","Data":"c227bdc70c6ae3586ade0af7a0a3a0c9868bcd73c84315b48d0dcdc8cba6892b"} Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.568071 4832 scope.go:117] "RemoveContainer" containerID="cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.573082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d322a933-38eb-4eb0-81c7-86d11a5f2d2c","Type":"ContainerStarted","Data":"c4d290aec8c87a9293443f7ba4eeef8cb9631babbbd27f4b4bacd289a7ed0ca7"} Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.589786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msz8v\" (UniqueName: \"kubernetes.io/projected/fcff2a1c-2a06-4930-aec6-2970335e6e78-kube-api-access-msz8v\") pod \"fcff2a1c-2a06-4930-aec6-2970335e6e78\" (UID: \"fcff2a1c-2a06-4930-aec6-2970335e6e78\") " Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.590879 
4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.591144 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.598148 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fcff2a1c-2a06-4930-aec6-2970335e6e78" (UID: "fcff2a1c-2a06-4930-aec6-2970335e6e78"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.602793 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcff2a1c-2a06-4930-aec6-2970335e6e78-kube-api-access-msz8v" (OuterVolumeSpecName: "kube-api-access-msz8v") pod "fcff2a1c-2a06-4930-aec6-2970335e6e78" (UID: "fcff2a1c-2a06-4930-aec6-2970335e6e78"). InnerVolumeSpecName "kube-api-access-msz8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.615486 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.615446156 podStartE2EDuration="2.615446156s" podCreationTimestamp="2026-01-25 08:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:19:11.598704878 +0000 UTC m=+1334.272528431" watchObservedRunningTime="2026-01-25 08:19:11.615446156 +0000 UTC m=+1334.289269689" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.693929 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcff2a1c-2a06-4930-aec6-2970335e6e78-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.693974 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msz8v\" (UniqueName: \"kubernetes.io/projected/fcff2a1c-2a06-4930-aec6-2970335e6e78-kube-api-access-msz8v\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.701860 4832 scope.go:117] "RemoveContainer" containerID="7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.729063 4832 scope.go:117] "RemoveContainer" containerID="cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af" Jan 25 08:19:11 crc kubenswrapper[4832]: E0125 08:19:11.730198 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af\": container with ID starting with cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af not found: ID does not exist" 
containerID="cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.730239 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af"} err="failed to get container status \"cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af\": rpc error: code = NotFound desc = could not find container \"cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af\": container with ID starting with cfb3f58aebd01b784ef5c30886993ea09e6016a58e785780765bd2caf20533af not found: ID does not exist" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.730265 4832 scope.go:117] "RemoveContainer" containerID="7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00" Jan 25 08:19:11 crc kubenswrapper[4832]: E0125 08:19:11.730663 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00\": container with ID starting with 7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00 not found: ID does not exist" containerID="7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.730734 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00"} err="failed to get container status \"7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00\": rpc error: code = NotFound desc = could not find container \"7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00\": container with ID starting with 7697b5c3285287de3a50d7a78ae8d1d130db9866c171a8ac9f02b1cbe751db00 not found: ID does not exist" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.905947 4832 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.936085 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.952259 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:19:11 crc kubenswrapper[4832]: E0125 08:19:11.953006 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-metadata" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.953036 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-metadata" Jan 25 08:19:11 crc kubenswrapper[4832]: E0125 08:19:11.953067 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-log" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.953077 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-log" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.953342 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-metadata" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.953369 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" containerName="nova-metadata-log" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.954923 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.960744 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.962172 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:19:11 crc kubenswrapper[4832]: I0125 08:19:11.963678 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.118935 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c0a6750-31ec-4a66-8160-2f74a44a5d33-logs\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.119014 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.119129 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.119172 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-config-data\") pod \"nova-metadata-0\" (UID: 
\"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.119220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4xwc\" (UniqueName: \"kubernetes.io/projected/3c0a6750-31ec-4a66-8160-2f74a44a5d33-kube-api-access-w4xwc\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.221666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.221741 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-config-data\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.221799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4xwc\" (UniqueName: \"kubernetes.io/projected/3c0a6750-31ec-4a66-8160-2f74a44a5d33-kube-api-access-w4xwc\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.221878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c0a6750-31ec-4a66-8160-2f74a44a5d33-logs\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.221908 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.222315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c0a6750-31ec-4a66-8160-2f74a44a5d33-logs\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.226812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.227404 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.228017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0a6750-31ec-4a66-8160-2f74a44a5d33-config-data\") pod \"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.247624 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4xwc\" (UniqueName: \"kubernetes.io/projected/3c0a6750-31ec-4a66-8160-2f74a44a5d33-kube-api-access-w4xwc\") pod 
\"nova-metadata-0\" (UID: \"3c0a6750-31ec-4a66-8160-2f74a44a5d33\") " pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.280866 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 25 08:19:12 crc kubenswrapper[4832]: I0125 08:19:12.748112 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.552986 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.597062 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerID="9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2" exitCode=0 Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.597133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0","Type":"ContainerDied","Data":"9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2"} Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.597158 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.597187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0","Type":"ContainerDied","Data":"fa0db33144a070e871cc8148838c11bce025a8dc568bc860970ad3ed9b8983a6"} Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.597206 4832 scope.go:117] "RemoveContainer" containerID="9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.600202 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c0a6750-31ec-4a66-8160-2f74a44a5d33","Type":"ContainerStarted","Data":"8d41b7fe1b0f4e5f3ca968fead3942d5933ed6398f18a0aed5fbef93211b9723"} Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.600237 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c0a6750-31ec-4a66-8160-2f74a44a5d33","Type":"ContainerStarted","Data":"3ddd70f926b22ff1621c6a533ec5a942dc589d875e2660f72f49379820e698b4"} Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.600248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c0a6750-31ec-4a66-8160-2f74a44a5d33","Type":"ContainerStarted","Data":"400171c170bf6d5b4ad78250e6b6dbae3dbcbbb3042002d5a0ff4144fc146956"} Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.619999 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.619979044 podStartE2EDuration="2.619979044s" podCreationTimestamp="2026-01-25 08:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:19:13.615135907 +0000 UTC m=+1336.288959440" watchObservedRunningTime="2026-01-25 08:19:13.619979044 +0000 UTC m=+1336.293802577" Jan 25 
08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.621105 4832 scope.go:117] "RemoveContainer" containerID="bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.640718 4832 scope.go:117] "RemoveContainer" containerID="9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2" Jan 25 08:19:13 crc kubenswrapper[4832]: E0125 08:19:13.641166 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2\": container with ID starting with 9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2 not found: ID does not exist" containerID="9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.641211 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2"} err="failed to get container status \"9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2\": rpc error: code = NotFound desc = could not find container \"9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2\": container with ID starting with 9de35a63bac6dead6113b6c1fd3c5e2bd0ddb664dbe0ca107111996947ec14b2 not found: ID does not exist" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.641239 4832 scope.go:117] "RemoveContainer" containerID="bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc" Jan 25 08:19:13 crc kubenswrapper[4832]: E0125 08:19:13.641961 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc\": container with ID starting with bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc not found: ID does not exist" 
containerID="bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.641991 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc"} err="failed to get container status \"bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc\": rpc error: code = NotFound desc = could not find container \"bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc\": container with ID starting with bd9c318a2a577ef8e6704c4fba8e7f191a4bdf4816299ee85079cfbdcbb226cc not found: ID does not exist" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.646683 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcjqg\" (UniqueName: \"kubernetes.io/projected/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-kube-api-access-wcjqg\") pod \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.646772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-config-data\") pod \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.646804 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-combined-ca-bundle\") pod \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.646914 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-internal-tls-certs\") pod \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.646985 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-logs\") pod \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.647047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-public-tls-certs\") pod \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\" (UID: \"f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0\") " Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.647418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-logs" (OuterVolumeSpecName: "logs") pod "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" (UID: "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.648093 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-logs\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.683719 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcff2a1c-2a06-4930-aec6-2970335e6e78" path="/var/lib/kubelet/pods/fcff2a1c-2a06-4930-aec6-2970335e6e78/volumes" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.687473 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-kube-api-access-wcjqg" (OuterVolumeSpecName: "kube-api-access-wcjqg") pod "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" (UID: "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0"). InnerVolumeSpecName "kube-api-access-wcjqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.694103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-config-data" (OuterVolumeSpecName: "config-data") pod "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" (UID: "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.695592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" (UID: "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.734756 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" (UID: "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.742729 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" (UID: "f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.750344 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.750416 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcjqg\" (UniqueName: \"kubernetes.io/projected/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-kube-api-access-wcjqg\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.750432 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.750443 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.750456 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.945301 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.961106 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.971851 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 25 08:19:13 crc kubenswrapper[4832]: E0125 08:19:13.972277 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-api" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.972297 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-api" Jan 25 08:19:13 crc kubenswrapper[4832]: E0125 08:19:13.972313 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-log" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.972319 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-log" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.972522 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-api" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.972545 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" containerName="nova-api-log" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.973597 4832 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.976130 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.977471 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.979432 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 25 08:19:13 crc kubenswrapper[4832]: I0125 08:19:13.982214 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.058231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.058618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-public-tls-certs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.058678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/853956ed-8d6c-401a-9d3b-7325013053a4-logs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.058709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-65rch\" (UniqueName: \"kubernetes.io/projected/853956ed-8d6c-401a-9d3b-7325013053a4-kube-api-access-65rch\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.058733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.058765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-config-data\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.161040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.161136 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-public-tls-certs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.161195 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/853956ed-8d6c-401a-9d3b-7325013053a4-logs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " 
pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.161226 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rch\" (UniqueName: \"kubernetes.io/projected/853956ed-8d6c-401a-9d3b-7325013053a4-kube-api-access-65rch\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.161246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.161275 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-config-data\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.161920 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/853956ed-8d6c-401a-9d3b-7325013053a4-logs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.166075 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-public-tls-certs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.166523 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.166757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-config-data\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.166911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/853956ed-8d6c-401a-9d3b-7325013053a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.184688 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65rch\" (UniqueName: \"kubernetes.io/projected/853956ed-8d6c-401a-9d3b-7325013053a4-kube-api-access-65rch\") pod \"nova-api-0\" (UID: \"853956ed-8d6c-401a-9d3b-7325013053a4\") " pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.309123 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.837790 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 25 08:19:14 crc kubenswrapper[4832]: W0125 08:19:14.839560 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod853956ed_8d6c_401a_9d3b_7325013053a4.slice/crio-e0bb8c2418ff2a2b9e02d98c4215a35f937158ca0a7432e62f442f07bec1a5ea WatchSource:0}: Error finding container e0bb8c2418ff2a2b9e02d98c4215a35f937158ca0a7432e62f442f07bec1a5ea: Status 404 returned error can't find the container with id e0bb8c2418ff2a2b9e02d98c4215a35f937158ca0a7432e62f442f07bec1a5ea Jan 25 08:19:14 crc kubenswrapper[4832]: I0125 08:19:14.929689 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 25 08:19:15 crc kubenswrapper[4832]: I0125 08:19:15.626476 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"853956ed-8d6c-401a-9d3b-7325013053a4","Type":"ContainerStarted","Data":"164d2a512dcb0693733d209f8666a9a4d3c33d9c4ee5294743bae87ccc95e9f9"} Jan 25 08:19:15 crc kubenswrapper[4832]: I0125 08:19:15.627184 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"853956ed-8d6c-401a-9d3b-7325013053a4","Type":"ContainerStarted","Data":"91e17a63f463e7f54ebe9b13c7e553d7bece88b6d30a31682d82358157400b4d"} Jan 25 08:19:15 crc kubenswrapper[4832]: I0125 08:19:15.627196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"853956ed-8d6c-401a-9d3b-7325013053a4","Type":"ContainerStarted","Data":"e0bb8c2418ff2a2b9e02d98c4215a35f937158ca0a7432e62f442f07bec1a5ea"} Jan 25 08:19:15 crc kubenswrapper[4832]: I0125 08:19:15.657079 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.657056389 
podStartE2EDuration="2.657056389s" podCreationTimestamp="2026-01-25 08:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:19:15.643501397 +0000 UTC m=+1338.317324950" watchObservedRunningTime="2026-01-25 08:19:15.657056389 +0000 UTC m=+1338.330879922" Jan 25 08:19:15 crc kubenswrapper[4832]: I0125 08:19:15.680647 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0" path="/var/lib/kubelet/pods/f9ebe7ae-8c59-4736-8722-b0d8bcfa61f0/volumes" Jan 25 08:19:17 crc kubenswrapper[4832]: I0125 08:19:17.281828 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 25 08:19:17 crc kubenswrapper[4832]: I0125 08:19:17.282521 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 25 08:19:19 crc kubenswrapper[4832]: I0125 08:19:19.930155 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 25 08:19:19 crc kubenswrapper[4832]: I0125 08:19:19.962374 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 25 08:19:20 crc kubenswrapper[4832]: I0125 08:19:20.711793 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 25 08:19:22 crc kubenswrapper[4832]: I0125 08:19:22.281549 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 25 08:19:22 crc kubenswrapper[4832]: I0125 08:19:22.281868 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 25 08:19:23 crc kubenswrapper[4832]: I0125 08:19:23.293561 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3c0a6750-31ec-4a66-8160-2f74a44a5d33" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 25 08:19:23 crc kubenswrapper[4832]: I0125 08:19:23.293567 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3c0a6750-31ec-4a66-8160-2f74a44a5d33" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 08:19:24 crc kubenswrapper[4832]: I0125 08:19:24.310268 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 25 08:19:24 crc kubenswrapper[4832]: I0125 08:19:24.310724 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 25 08:19:25 crc kubenswrapper[4832]: I0125 08:19:25.332863 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="853956ed-8d6c-401a-9d3b-7325013053a4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 25 08:19:25 crc kubenswrapper[4832]: I0125 08:19:25.332886 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="853956ed-8d6c-401a-9d3b-7325013053a4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 25 08:19:25 crc kubenswrapper[4832]: I0125 08:19:25.853357 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 25 08:19:32 crc kubenswrapper[4832]: I0125 08:19:32.289538 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 25 08:19:32 crc kubenswrapper[4832]: I0125 
08:19:32.291503 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 25 08:19:32 crc kubenswrapper[4832]: I0125 08:19:32.295152 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 25 08:19:32 crc kubenswrapper[4832]: I0125 08:19:32.779955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 25 08:19:34 crc kubenswrapper[4832]: I0125 08:19:34.318468 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 25 08:19:34 crc kubenswrapper[4832]: I0125 08:19:34.318810 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 25 08:19:34 crc kubenswrapper[4832]: I0125 08:19:34.319111 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 25 08:19:34 crc kubenswrapper[4832]: I0125 08:19:34.319153 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 25 08:19:34 crc kubenswrapper[4832]: I0125 08:19:34.326037 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 25 08:19:34 crc kubenswrapper[4832]: I0125 08:19:34.326099 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 25 08:19:42 crc kubenswrapper[4832]: I0125 08:19:42.479899 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:19:43 crc kubenswrapper[4832]: I0125 08:19:43.311797 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:19:47 crc kubenswrapper[4832]: I0125 08:19:47.076578 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerName="rabbitmq" 
containerID="cri-o://f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166" gracePeriod=604796 Jan 25 08:19:47 crc kubenswrapper[4832]: I0125 08:19:47.504493 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9b86227f-350e-4e03-aefd-00f308ccb238" containerName="rabbitmq" containerID="cri-o://b4222cb79b322095ec7642cdbdab0fdb9e6322bb2158b4beba10850315703092" gracePeriod=604796 Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.766516 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.886434 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-tls\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.886980 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-plugins-conf\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887044 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwf4\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-kube-api-access-hvwf4\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887072 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-erlang-cookie\") pod 
\"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887119 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f80d9a5-5d45-4053-875c-908242efc5e9-erlang-cookie-secret\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887149 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-config-data\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887188 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-server-conf\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-plugins\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887271 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-confd\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f80d9a5-5d45-4053-875c-908242efc5e9-pod-info\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2f80d9a5-5d45-4053-875c-908242efc5e9\" (UID: \"2f80d9a5-5d45-4053-875c-908242efc5e9\") " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887483 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887603 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.887743 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.888458 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.888495 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.888516 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.893698 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2f80d9a5-5d45-4053-875c-908242efc5e9-pod-info" (OuterVolumeSpecName: "pod-info") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.897117 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.902375 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.903151 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f80d9a5-5d45-4053-875c-908242efc5e9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.904378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-kube-api-access-hvwf4" (OuterVolumeSpecName: "kube-api-access-hvwf4") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "kube-api-access-hvwf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.937914 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-config-data" (OuterVolumeSpecName: "config-data") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.961530 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-server-conf" (OuterVolumeSpecName: "server-conf") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.966358 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerID="f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166" exitCode=0 Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.966430 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f80d9a5-5d45-4053-875c-908242efc5e9","Type":"ContainerDied","Data":"f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166"} Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.966461 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f80d9a5-5d45-4053-875c-908242efc5e9","Type":"ContainerDied","Data":"0d94bc578c73ae11547fdb3111358597a30e981ca6604de55f0df30a236b7445"} Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.966477 4832 scope.go:117] "RemoveContainer" containerID="f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.966605 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.975986 4832 generic.go:334] "Generic (PLEG): container finished" podID="9b86227f-350e-4e03-aefd-00f308ccb238" containerID="b4222cb79b322095ec7642cdbdab0fdb9e6322bb2158b4beba10850315703092" exitCode=0 Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.976022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b86227f-350e-4e03-aefd-00f308ccb238","Type":"ContainerDied","Data":"b4222cb79b322095ec7642cdbdab0fdb9e6322bb2158b4beba10850315703092"} Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.991656 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvwf4\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-kube-api-access-hvwf4\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.991690 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f80d9a5-5d45-4053-875c-908242efc5e9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.991699 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.991708 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f80d9a5-5d45-4053-875c-908242efc5e9-server-conf\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.991751 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f80d9a5-5d45-4053-875c-908242efc5e9-pod-info\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:53 crc 
kubenswrapper[4832]: I0125 08:19:53.991773 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 25 08:19:53 crc kubenswrapper[4832]: I0125 08:19:53.991781 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.004888 4832 scope.go:117] "RemoveContainer" containerID="8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.013149 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.026991 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2f80d9a5-5d45-4053-875c-908242efc5e9" (UID: "2f80d9a5-5d45-4053-875c-908242efc5e9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.028866 4832 scope.go:117] "RemoveContainer" containerID="f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166" Jan 25 08:19:54 crc kubenswrapper[4832]: E0125 08:19:54.029317 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166\": container with ID starting with f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166 not found: ID does not exist" containerID="f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.029348 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166"} err="failed to get container status \"f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166\": rpc error: code = NotFound desc = could not find container \"f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166\": container with ID starting with f156861900973b8bec71d88b12b47b18fb0be58100a51df160c5b222ddc36166 not found: ID does not exist" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.029369 4832 scope.go:117] "RemoveContainer" containerID="8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e" Jan 25 08:19:54 crc kubenswrapper[4832]: E0125 08:19:54.029678 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e\": container with ID starting with 8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e not found: ID does not exist" containerID="8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.029705 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e"} err="failed to get container status \"8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e\": rpc error: code = NotFound desc = could not find container \"8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e\": container with ID starting with 8c6a9c3ffb2f64548b47ebec87882784fa19f4d77d6e1f3a9d7a92e52d67191e not found: ID does not exist" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.095867 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.095898 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f80d9a5-5d45-4053-875c-908242efc5e9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.131982 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197099 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-plugins-conf\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197144 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-confd\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197195 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197220 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-plugins\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gptfm\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-kube-api-access-gptfm\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197324 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-erlang-cookie\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b86227f-350e-4e03-aefd-00f308ccb238-pod-info\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197397 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-config-data\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197443 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b86227f-350e-4e03-aefd-00f308ccb238-erlang-cookie-secret\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-tls\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.197946 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-server-conf\") pod \"9b86227f-350e-4e03-aefd-00f308ccb238\" (UID: \"9b86227f-350e-4e03-aefd-00f308ccb238\") " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 
08:19:54.198977 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.199573 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.202586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.206997 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-kube-api-access-gptfm" (OuterVolumeSpecName: "kube-api-access-gptfm") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "kube-api-access-gptfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.207365 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.207358 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b86227f-350e-4e03-aefd-00f308ccb238-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.207966 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.209104 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b86227f-350e-4e03-aefd-00f308ccb238-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.232431 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-config-data" (OuterVolumeSpecName: "config-data") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.258627 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300804 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300849 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300859 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300868 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gptfm\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-kube-api-access-gptfm\") on node \"crc\" DevicePath \"\"" Jan 
25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300881 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300890 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b86227f-350e-4e03-aefd-00f308ccb238-pod-info\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300898 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300906 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b86227f-350e-4e03-aefd-00f308ccb238-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300914 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.300921 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b86227f-350e-4e03-aefd-00f308ccb238-server-conf\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.319780 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.333311 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.346489 4832 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.351575 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b86227f-350e-4e03-aefd-00f308ccb238" (UID: "9b86227f-350e-4e03-aefd-00f308ccb238"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.364574 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:19:54 crc kubenswrapper[4832]: E0125 08:19:54.365045 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerName="rabbitmq" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.365062 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerName="rabbitmq" Jan 25 08:19:54 crc kubenswrapper[4832]: E0125 08:19:54.365077 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b86227f-350e-4e03-aefd-00f308ccb238" containerName="setup-container" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.365084 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b86227f-350e-4e03-aefd-00f308ccb238" containerName="setup-container" Jan 25 08:19:54 crc kubenswrapper[4832]: E0125 08:19:54.365107 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b86227f-350e-4e03-aefd-00f308ccb238" containerName="rabbitmq" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.365113 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b86227f-350e-4e03-aefd-00f308ccb238" containerName="rabbitmq" Jan 25 08:19:54 crc kubenswrapper[4832]: E0125 08:19:54.365144 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerName="setup-container" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.365150 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerName="setup-container" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.365332 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f80d9a5-5d45-4053-875c-908242efc5e9" containerName="rabbitmq" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.365352 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b86227f-350e-4e03-aefd-00f308ccb238" containerName="rabbitmq" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.366439 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.378977 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.379031 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.379279 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.379713 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ktmhd" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.379836 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.379917 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.380501 4832 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.394635 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.402261 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.402326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.402357 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efe389bf-7e64-417c-96c8-d302858a0722-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.402624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.402887 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.402933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.402970 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efe389bf-7e64-417c-96c8-d302858a0722-pod-info\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.403062 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.403123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-server-conf\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.403187 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ftbp\" (UniqueName: 
\"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-kube-api-access-7ftbp\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.403204 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-config-data\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.403284 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.403300 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b86227f-350e-4e03-aefd-00f308ccb238-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504363 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efe389bf-7e64-417c-96c8-d302858a0722-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504468 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504506 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504526 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efe389bf-7e64-417c-96c8-d302858a0722-pod-info\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-server-conf\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " 
pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ftbp\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-kube-api-access-7ftbp\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-config-data\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504684 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.504821 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.505274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.505460 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.506123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.506377 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-server-conf\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.506861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efe389bf-7e64-417c-96c8-d302858a0722-config-data\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.510370 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.514922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efe389bf-7e64-417c-96c8-d302858a0722-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.514984 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.515431 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efe389bf-7e64-417c-96c8-d302858a0722-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.522608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ftbp\" (UniqueName: \"kubernetes.io/projected/efe389bf-7e64-417c-96c8-d302858a0722-kube-api-access-7ftbp\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.541860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"efe389bf-7e64-417c-96c8-d302858a0722\") " pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.732928 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.989484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9b86227f-350e-4e03-aefd-00f308ccb238","Type":"ContainerDied","Data":"000c97a78739b10d84af5f007299db50fd9d0dfbe104f338dba76f12ec758ed4"} Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.989814 4832 scope.go:117] "RemoveContainer" containerID="b4222cb79b322095ec7642cdbdab0fdb9e6322bb2158b4beba10850315703092" Jan 25 08:19:54 crc kubenswrapper[4832]: I0125 08:19:54.989762 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.014032 4832 scope.go:117] "RemoveContainer" containerID="b460c04d4adb8e23c0d8d586e6e38768fc8da8021c8d34a10874eaba07e58ccf" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.029359 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.047610 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.070792 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.086619 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.086752 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.090433 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.090633 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.090734 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.090806 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2tqqh" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.090964 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.091065 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.091501 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.161271 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-79s92"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.162874 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.164777 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.181879 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-79s92"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.216223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9cf62746-47cb-4e83-9211-57a799a06e93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhqn\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-kube-api-access-mxhqn\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217438 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217523 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217553 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9cf62746-47cb-4e83-9211-57a799a06e93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217632 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217670 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217701 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.217755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322396 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6txw2\" (UniqueName: 
\"kubernetes.io/projected/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-kube-api-access-6txw2\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322924 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322946 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-config\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.322998 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.323015 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9cf62746-47cb-4e83-9211-57a799a06e93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.323031 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhqn\" (UniqueName: 
\"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-kube-api-access-mxhqn\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.323051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.323071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.323156 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.323183 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.323215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9cf62746-47cb-4e83-9211-57a799a06e93-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.327234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.327367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.328029 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.329143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.331275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.332300 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cf62746-47cb-4e83-9211-57a799a06e93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.337542 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9cf62746-47cb-4e83-9211-57a799a06e93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.339867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.347361 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.351801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9cf62746-47cb-4e83-9211-57a799a06e93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.370264 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mxhqn\" (UniqueName: \"kubernetes.io/projected/9cf62746-47cb-4e83-9211-57a799a06e93-kube-api-access-mxhqn\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.376326 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-79s92"] Jan 25 08:19:55 crc kubenswrapper[4832]: E0125 08:19:55.377112 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-6txw2 openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-67b789f86c-79s92" podUID="f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.418216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9cf62746-47cb-4e83-9211-57a799a06e93\") " pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.424979 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.425026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6txw2\" (UniqueName: \"kubernetes.io/projected/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-kube-api-access-6txw2\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 
08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.425055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.425073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.425101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-config\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.425127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.425147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.426170 4832 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-5r9mm"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.426621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.426625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.427169 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-config\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.427637 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.427738 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " 
pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.428205 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.430179 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.441331 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-5r9mm"] Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.446519 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6txw2\" (UniqueName: \"kubernetes.io/projected/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-kube-api-access-6txw2\") pod \"dnsmasq-dns-67b789f86c-79s92\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.528794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.528867 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc 
kubenswrapper[4832]: I0125 08:19:55.528890 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.528914 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2snp\" (UniqueName: \"kubernetes.io/projected/8b7acd70-a72a-477f-af0d-455512cb4e81-kube-api-access-n2snp\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.528936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.528962 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.529028 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-config\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " 
pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.630421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-config\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.630509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.630548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.630569 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.630594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2snp\" (UniqueName: \"kubernetes.io/projected/8b7acd70-a72a-477f-af0d-455512cb4e81-kube-api-access-n2snp\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" 
Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.630613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.630634 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.631554 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.632047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.632054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-config\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.632186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.632220 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.632801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b7acd70-a72a-477f-af0d-455512cb4e81-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.648837 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2snp\" (UniqueName: \"kubernetes.io/projected/8b7acd70-a72a-477f-af0d-455512cb4e81-kube-api-access-n2snp\") pod \"dnsmasq-dns-cb6ffcf87-5r9mm\" (UID: \"8b7acd70-a72a-477f-af0d-455512cb4e81\") " pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.679489 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f80d9a5-5d45-4053-875c-908242efc5e9" path="/var/lib/kubelet/pods/2f80d9a5-5d45-4053-875c-908242efc5e9/volumes" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.680251 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b86227f-350e-4e03-aefd-00f308ccb238" path="/var/lib/kubelet/pods/9b86227f-350e-4e03-aefd-00f308ccb238/volumes" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.706889 4832 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:19:55 crc kubenswrapper[4832]: I0125 08:19:55.821195 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.004960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"efe389bf-7e64-417c-96c8-d302858a0722","Type":"ContainerStarted","Data":"6dc0fee36cb2154ed9929e43b50cf876827c0120c8c391c101e484d6aa81424b"} Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.004994 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.023153 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.138957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-nb\") pod \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139052 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-sb\") pod \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139073 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-swift-storage-0\") pod \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\" (UID: 
\"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139287 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6txw2\" (UniqueName: \"kubernetes.io/projected/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-kube-api-access-6txw2\") pod \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139330 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-config\") pod \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139401 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-svc\") pod \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139444 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-openstack-edpm-ipam\") pod \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\" (UID: \"f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1\") " Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139507 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" (UID: "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.139968 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.140188 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-config" (OuterVolumeSpecName: "config") pod "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" (UID: "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.140242 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" (UID: "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.140497 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" (UID: "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.140518 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" (UID: "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.140766 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" (UID: "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.146054 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-kube-api-access-6txw2" (OuterVolumeSpecName: "kube-api-access-6txw2") pod "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" (UID: "f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1"). InnerVolumeSpecName "kube-api-access-6txw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:19:56 crc kubenswrapper[4832]: W0125 08:19:56.179921 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cf62746_47cb_4e83_9211_57a799a06e93.slice/crio-432ee207a73ff35d0c4ce4c447951c6e349ca27cad4cb398be902c7a84efdeaa WatchSource:0}: Error finding container 432ee207a73ff35d0c4ce4c447951c6e349ca27cad4cb398be902c7a84efdeaa: Status 404 returned error can't find the container with id 432ee207a73ff35d0c4ce4c447951c6e349ca27cad4cb398be902c7a84efdeaa Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.186626 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.241581 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:56 crc 
kubenswrapper[4832]: I0125 08:19:56.241853 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.241862 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6txw2\" (UniqueName: \"kubernetes.io/projected/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-kube-api-access-6txw2\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.241871 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.241883 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.241891 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:19:56 crc kubenswrapper[4832]: I0125 08:19:56.316861 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-5r9mm"] Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.027120 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9cf62746-47cb-4e83-9211-57a799a06e93","Type":"ContainerStarted","Data":"432ee207a73ff35d0c4ce4c447951c6e349ca27cad4cb398be902c7a84efdeaa"} Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.028793 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b7acd70-a72a-477f-af0d-455512cb4e81" 
containerID="84cfa32bf5629aca4f9f94dea1f7de91a54605c1dbddabdd1241e94a7774f085" exitCode=0 Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.028869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" event={"ID":"8b7acd70-a72a-477f-af0d-455512cb4e81","Type":"ContainerDied","Data":"84cfa32bf5629aca4f9f94dea1f7de91a54605c1dbddabdd1241e94a7774f085"} Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.028892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" event={"ID":"8b7acd70-a72a-477f-af0d-455512cb4e81","Type":"ContainerStarted","Data":"bc715d441938630cf8d8c32db217c776c53a991bb46a44548911bea496e022e3"} Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.032346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"efe389bf-7e64-417c-96c8-d302858a0722","Type":"ContainerStarted","Data":"5898922101d8a8f17efbcc21022e36bd0db7cf30a0dffc86d2a923d61dcf0698"} Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.032366 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-79s92" Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.144283 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-79s92"] Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.151831 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-79s92"] Jan 25 08:19:57 crc kubenswrapper[4832]: I0125 08:19:57.692108 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1" path="/var/lib/kubelet/pods/f5e5ad7d-ce45-41d3-a5f5-d5fd8a35d3f1/volumes" Jan 25 08:19:58 crc kubenswrapper[4832]: I0125 08:19:58.043310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9cf62746-47cb-4e83-9211-57a799a06e93","Type":"ContainerStarted","Data":"00d5b38d78cd784ac14ef72c75aa548bd124c6ddea36fa729f7f7cabaa520dde"} Jan 25 08:19:58 crc kubenswrapper[4832]: I0125 08:19:58.045011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" event={"ID":"8b7acd70-a72a-477f-af0d-455512cb4e81","Type":"ContainerStarted","Data":"d694cf93f58be01bd4469d6a21cd8d4992a7f1eb29576cfb7b2ac2e3b4b217e7"} Jan 25 08:19:58 crc kubenswrapper[4832]: I0125 08:19:58.045495 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:19:58 crc kubenswrapper[4832]: I0125 08:19:58.091940 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" podStartSLOduration=3.091919679 podStartE2EDuration="3.091919679s" podCreationTimestamp="2026-01-25 08:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:19:58.085253957 +0000 UTC m=+1380.759077500" watchObservedRunningTime="2026-01-25 08:19:58.091919679 +0000 UTC 
m=+1380.765743212" Jan 25 08:20:05 crc kubenswrapper[4832]: I0125 08:20:05.822633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-5r9mm" Jan 25 08:20:05 crc kubenswrapper[4832]: I0125 08:20:05.883413 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-87zjq"] Jan 25 08:20:05 crc kubenswrapper[4832]: I0125 08:20:05.883716 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" podUID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerName="dnsmasq-dns" containerID="cri-o://258632df7d35708001d8d4e18182a4b71a169fc05d60153ff36d5d1f35c4a34e" gracePeriod=10 Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.138138 4832 generic.go:334] "Generic (PLEG): container finished" podID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerID="258632df7d35708001d8d4e18182a4b71a169fc05d60153ff36d5d1f35c4a34e" exitCode=0 Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.138806 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" event={"ID":"2422fda2-c886-45e9-93ee-8ef936a365f8","Type":"ContainerDied","Data":"258632df7d35708001d8d4e18182a4b71a169fc05d60153ff36d5d1f35c4a34e"} Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.369289 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.440005 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd5s8\" (UniqueName: \"kubernetes.io/projected/2422fda2-c886-45e9-93ee-8ef936a365f8-kube-api-access-sd5s8\") pod \"2422fda2-c886-45e9-93ee-8ef936a365f8\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.440054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-swift-storage-0\") pod \"2422fda2-c886-45e9-93ee-8ef936a365f8\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.440102 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb\") pod \"2422fda2-c886-45e9-93ee-8ef936a365f8\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.440215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-svc\") pod \"2422fda2-c886-45e9-93ee-8ef936a365f8\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.440284 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-sb\") pod \"2422fda2-c886-45e9-93ee-8ef936a365f8\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.440319 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-config\") pod \"2422fda2-c886-45e9-93ee-8ef936a365f8\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.446952 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2422fda2-c886-45e9-93ee-8ef936a365f8-kube-api-access-sd5s8" (OuterVolumeSpecName: "kube-api-access-sd5s8") pod "2422fda2-c886-45e9-93ee-8ef936a365f8" (UID: "2422fda2-c886-45e9-93ee-8ef936a365f8"). InnerVolumeSpecName "kube-api-access-sd5s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.488346 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2422fda2-c886-45e9-93ee-8ef936a365f8" (UID: "2422fda2-c886-45e9-93ee-8ef936a365f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.492323 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-config" (OuterVolumeSpecName: "config") pod "2422fda2-c886-45e9-93ee-8ef936a365f8" (UID: "2422fda2-c886-45e9-93ee-8ef936a365f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.493294 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2422fda2-c886-45e9-93ee-8ef936a365f8" (UID: "2422fda2-c886-45e9-93ee-8ef936a365f8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:20:06 crc kubenswrapper[4832]: E0125 08:20:06.493646 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb podName:2422fda2-c886-45e9-93ee-8ef936a365f8 nodeName:}" failed. No retries permitted until 2026-01-25 08:20:06.993616691 +0000 UTC m=+1389.667440224 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb") pod "2422fda2-c886-45e9-93ee-8ef936a365f8" (UID: "2422fda2-c886-45e9-93ee-8ef936a365f8") : error deleting /var/lib/kubelet/pods/2422fda2-c886-45e9-93ee-8ef936a365f8/volume-subpaths: remove /var/lib/kubelet/pods/2422fda2-c886-45e9-93ee-8ef936a365f8/volume-subpaths: no such file or directory Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.494041 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2422fda2-c886-45e9-93ee-8ef936a365f8" (UID: "2422fda2-c886-45e9-93ee-8ef936a365f8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.542171 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd5s8\" (UniqueName: \"kubernetes.io/projected/2422fda2-c886-45e9-93ee-8ef936a365f8-kube-api-access-sd5s8\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.542460 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.542583 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.542666 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:06 crc kubenswrapper[4832]: I0125 08:20:06.542736 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-config\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.053162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb\") pod \"2422fda2-c886-45e9-93ee-8ef936a365f8\" (UID: \"2422fda2-c886-45e9-93ee-8ef936a365f8\") " Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.053954 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb" (OuterVolumeSpecName: 
"ovsdbserver-nb") pod "2422fda2-c886-45e9-93ee-8ef936a365f8" (UID: "2422fda2-c886-45e9-93ee-8ef936a365f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.155299 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2422fda2-c886-45e9-93ee-8ef936a365f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.155634 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" event={"ID":"2422fda2-c886-45e9-93ee-8ef936a365f8","Type":"ContainerDied","Data":"be555f6210e88b40a8756acd65b7d9518ab4de3c485d173dd4d7c00a78f76ab3"} Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.155706 4832 scope.go:117] "RemoveContainer" containerID="258632df7d35708001d8d4e18182a4b71a169fc05d60153ff36d5d1f35c4a34e" Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.155714 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-87zjq" Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.193120 4832 scope.go:117] "RemoveContainer" containerID="441710a55dd61d984bbd4a2b8c2df3a20de3c702498a2d5e7bf09b6f1ee5621b" Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.198107 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-87zjq"] Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.205874 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-87zjq"] Jan 25 08:20:07 crc kubenswrapper[4832]: I0125 08:20:07.680505 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2422fda2-c886-45e9-93ee-8ef936a365f8" path="/var/lib/kubelet/pods/2422fda2-c886-45e9-93ee-8ef936a365f8/volumes" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.265122 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv"] Jan 25 08:20:14 crc kubenswrapper[4832]: E0125 08:20:14.266860 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerName="init" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.266888 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerName="init" Jan 25 08:20:14 crc kubenswrapper[4832]: E0125 08:20:14.266943 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerName="dnsmasq-dns" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.266956 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerName="dnsmasq-dns" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.267290 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2422fda2-c886-45e9-93ee-8ef936a365f8" containerName="dnsmasq-dns" Jan 25 08:20:14 crc 
kubenswrapper[4832]: I0125 08:20:14.268554 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.271014 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.272019 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.272371 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.273342 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.315359 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.315653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5295\" (UniqueName: \"kubernetes.io/projected/be2a25f4-32ba-4406-b6a6-bdae29720048-kube-api-access-h5295\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.315786 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.315875 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.331560 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv"] Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.418059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.418167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5295\" (UniqueName: \"kubernetes.io/projected/be2a25f4-32ba-4406-b6a6-bdae29720048-kube-api-access-h5295\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.418227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.418272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.426327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.426760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.428680 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: 
\"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.445024 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5295\" (UniqueName: \"kubernetes.io/projected/be2a25f4-32ba-4406-b6a6-bdae29720048-kube-api-access-h5295\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:14 crc kubenswrapper[4832]: I0125 08:20:14.613034 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:15 crc kubenswrapper[4832]: I0125 08:20:15.192822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv"] Jan 25 08:20:15 crc kubenswrapper[4832]: I0125 08:20:15.233064 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" event={"ID":"be2a25f4-32ba-4406-b6a6-bdae29720048","Type":"ContainerStarted","Data":"292a95514662bab2f310aa8449e0654deb6e5e49572550b69290781f84b90612"} Jan 25 08:20:24 crc kubenswrapper[4832]: I0125 08:20:24.355623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" event={"ID":"be2a25f4-32ba-4406-b6a6-bdae29720048","Type":"ContainerStarted","Data":"a63f9d03c06b13c910f43d351e375a85867c4d2df4f85f50f110372239375bad"} Jan 25 08:20:24 crc kubenswrapper[4832]: I0125 08:20:24.370983 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" podStartSLOduration=2.181080224 podStartE2EDuration="10.370964009s" podCreationTimestamp="2026-01-25 08:20:14 +0000 UTC" firstStartedPulling="2026-01-25 
08:20:15.201174404 +0000 UTC m=+1397.874997937" lastFinishedPulling="2026-01-25 08:20:23.391058189 +0000 UTC m=+1406.064881722" observedRunningTime="2026-01-25 08:20:24.36937503 +0000 UTC m=+1407.043198573" watchObservedRunningTime="2026-01-25 08:20:24.370964009 +0000 UTC m=+1407.044787542" Jan 25 08:20:29 crc kubenswrapper[4832]: I0125 08:20:29.400222 4832 generic.go:334] "Generic (PLEG): container finished" podID="efe389bf-7e64-417c-96c8-d302858a0722" containerID="5898922101d8a8f17efbcc21022e36bd0db7cf30a0dffc86d2a923d61dcf0698" exitCode=0 Jan 25 08:20:29 crc kubenswrapper[4832]: I0125 08:20:29.400312 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"efe389bf-7e64-417c-96c8-d302858a0722","Type":"ContainerDied","Data":"5898922101d8a8f17efbcc21022e36bd0db7cf30a0dffc86d2a923d61dcf0698"} Jan 25 08:20:30 crc kubenswrapper[4832]: I0125 08:20:30.410884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"efe389bf-7e64-417c-96c8-d302858a0722","Type":"ContainerStarted","Data":"75bb1291bbd9c578f78f13a74a47037445da19b55729a97e2e3a2dde590995fa"} Jan 25 08:20:30 crc kubenswrapper[4832]: I0125 08:20:30.412366 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 25 08:20:30 crc kubenswrapper[4832]: I0125 08:20:30.413946 4832 generic.go:334] "Generic (PLEG): container finished" podID="9cf62746-47cb-4e83-9211-57a799a06e93" containerID="00d5b38d78cd784ac14ef72c75aa548bd124c6ddea36fa729f7f7cabaa520dde" exitCode=0 Jan 25 08:20:30 crc kubenswrapper[4832]: I0125 08:20:30.413995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9cf62746-47cb-4e83-9211-57a799a06e93","Type":"ContainerDied","Data":"00d5b38d78cd784ac14ef72c75aa548bd124c6ddea36fa729f7f7cabaa520dde"} Jan 25 08:20:30 crc kubenswrapper[4832]: I0125 08:20:30.440148 4832 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.440128028 podStartE2EDuration="36.440128028s" podCreationTimestamp="2026-01-25 08:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:20:30.438571429 +0000 UTC m=+1413.112394962" watchObservedRunningTime="2026-01-25 08:20:30.440128028 +0000 UTC m=+1413.113951561" Jan 25 08:20:31 crc kubenswrapper[4832]: I0125 08:20:31.426079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9cf62746-47cb-4e83-9211-57a799a06e93","Type":"ContainerStarted","Data":"4c6be69708393ae41ef35817c0f34420cc8287a2ee22929dff28a888bafde7c2"} Jan 25 08:20:31 crc kubenswrapper[4832]: I0125 08:20:31.426856 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:20:31 crc kubenswrapper[4832]: I0125 08:20:31.455266 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.455249119 podStartE2EDuration="36.455249119s" podCreationTimestamp="2026-01-25 08:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:20:31.449651304 +0000 UTC m=+1414.123474867" watchObservedRunningTime="2026-01-25 08:20:31.455249119 +0000 UTC m=+1414.129072652" Jan 25 08:20:35 crc kubenswrapper[4832]: I0125 08:20:35.464370 4832 generic.go:334] "Generic (PLEG): container finished" podID="be2a25f4-32ba-4406-b6a6-bdae29720048" containerID="a63f9d03c06b13c910f43d351e375a85867c4d2df4f85f50f110372239375bad" exitCode=0 Jan 25 08:20:35 crc kubenswrapper[4832]: I0125 08:20:35.464445 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" 
event={"ID":"be2a25f4-32ba-4406-b6a6-bdae29720048","Type":"ContainerDied","Data":"a63f9d03c06b13c910f43d351e375a85867c4d2df4f85f50f110372239375bad"} Jan 25 08:20:36 crc kubenswrapper[4832]: I0125 08:20:36.892150 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:36 crc kubenswrapper[4832]: I0125 08:20:36.984950 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5295\" (UniqueName: \"kubernetes.io/projected/be2a25f4-32ba-4406-b6a6-bdae29720048-kube-api-access-h5295\") pod \"be2a25f4-32ba-4406-b6a6-bdae29720048\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " Jan 25 08:20:36 crc kubenswrapper[4832]: I0125 08:20:36.985132 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-ssh-key-openstack-edpm-ipam\") pod \"be2a25f4-32ba-4406-b6a6-bdae29720048\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " Jan 25 08:20:36 crc kubenswrapper[4832]: I0125 08:20:36.985180 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-repo-setup-combined-ca-bundle\") pod \"be2a25f4-32ba-4406-b6a6-bdae29720048\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " Jan 25 08:20:36 crc kubenswrapper[4832]: I0125 08:20:36.985206 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-inventory\") pod \"be2a25f4-32ba-4406-b6a6-bdae29720048\" (UID: \"be2a25f4-32ba-4406-b6a6-bdae29720048\") " Jan 25 08:20:36 crc kubenswrapper[4832]: I0125 08:20:36.991077 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/be2a25f4-32ba-4406-b6a6-bdae29720048-kube-api-access-h5295" (OuterVolumeSpecName: "kube-api-access-h5295") pod "be2a25f4-32ba-4406-b6a6-bdae29720048" (UID: "be2a25f4-32ba-4406-b6a6-bdae29720048"). InnerVolumeSpecName "kube-api-access-h5295". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:20:36 crc kubenswrapper[4832]: I0125 08:20:36.995560 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "be2a25f4-32ba-4406-b6a6-bdae29720048" (UID: "be2a25f4-32ba-4406-b6a6-bdae29720048"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.017883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "be2a25f4-32ba-4406-b6a6-bdae29720048" (UID: "be2a25f4-32ba-4406-b6a6-bdae29720048"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.018422 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-inventory" (OuterVolumeSpecName: "inventory") pod "be2a25f4-32ba-4406-b6a6-bdae29720048" (UID: "be2a25f4-32ba-4406-b6a6-bdae29720048"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.087274 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5295\" (UniqueName: \"kubernetes.io/projected/be2a25f4-32ba-4406-b6a6-bdae29720048-kube-api-access-h5295\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.087315 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.087325 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.087337 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be2a25f4-32ba-4406-b6a6-bdae29720048-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.483872 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" event={"ID":"be2a25f4-32ba-4406-b6a6-bdae29720048","Type":"ContainerDied","Data":"292a95514662bab2f310aa8449e0654deb6e5e49572550b69290781f84b90612"} Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.483909 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.483918 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="292a95514662bab2f310aa8449e0654deb6e5e49572550b69290781f84b90612" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.571327 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429"] Jan 25 08:20:37 crc kubenswrapper[4832]: E0125 08:20:37.571930 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2a25f4-32ba-4406-b6a6-bdae29720048" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.571960 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2a25f4-32ba-4406-b6a6-bdae29720048" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.572253 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2a25f4-32ba-4406-b6a6-bdae29720048" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.573091 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.574988 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.576103 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.576684 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.578315 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.581692 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429"] Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.596212 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqrtx\" (UniqueName: \"kubernetes.io/projected/306310b5-6753-4a5a-b279-41e070c2f970-kube-api-access-tqrtx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.596326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.596467 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.698191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqrtx\" (UniqueName: \"kubernetes.io/projected/306310b5-6753-4a5a-b279-41e070c2f970-kube-api-access-tqrtx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.698333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.698501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.704718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.708053 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.717060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqrtx\" (UniqueName: \"kubernetes.io/projected/306310b5-6753-4a5a-b279-41e070c2f970-kube-api-access-tqrtx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lr429\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:37 crc kubenswrapper[4832]: I0125 08:20:37.888849 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:38 crc kubenswrapper[4832]: I0125 08:20:38.492957 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429"] Jan 25 08:20:39 crc kubenswrapper[4832]: I0125 08:20:39.506316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" event={"ID":"306310b5-6753-4a5a-b279-41e070c2f970","Type":"ContainerStarted","Data":"12554ba82af61d76e0321fef0febdbe0b34a87613011a24e0996baac2d363dd9"} Jan 25 08:20:39 crc kubenswrapper[4832]: I0125 08:20:39.506735 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" event={"ID":"306310b5-6753-4a5a-b279-41e070c2f970","Type":"ContainerStarted","Data":"b30df97a92b3a15ae25bce89dd2f5ce89a7eb993b959de8b80d5fe7394d18c29"} Jan 25 08:20:39 crc kubenswrapper[4832]: I0125 08:20:39.552133 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" podStartSLOduration=2.089380476 podStartE2EDuration="2.552116933s" podCreationTimestamp="2026-01-25 08:20:37 +0000 UTC" firstStartedPulling="2026-01-25 08:20:38.524213932 +0000 UTC m=+1421.198037455" lastFinishedPulling="2026-01-25 08:20:38.986950379 +0000 UTC m=+1421.660773912" observedRunningTime="2026-01-25 08:20:39.520757701 +0000 UTC m=+1422.194581244" watchObservedRunningTime="2026-01-25 08:20:39.552116933 +0000 UTC m=+1422.225940466" Jan 25 08:20:42 crc kubenswrapper[4832]: I0125 08:20:42.535301 4832 generic.go:334] "Generic (PLEG): container finished" podID="306310b5-6753-4a5a-b279-41e070c2f970" containerID="12554ba82af61d76e0321fef0febdbe0b34a87613011a24e0996baac2d363dd9" exitCode=0 Jan 25 08:20:42 crc kubenswrapper[4832]: I0125 08:20:42.535376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" event={"ID":"306310b5-6753-4a5a-b279-41e070c2f970","Type":"ContainerDied","Data":"12554ba82af61d76e0321fef0febdbe0b34a87613011a24e0996baac2d363dd9"} Jan 25 08:20:43 crc kubenswrapper[4832]: I0125 08:20:43.960953 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.042666 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqrtx\" (UniqueName: \"kubernetes.io/projected/306310b5-6753-4a5a-b279-41e070c2f970-kube-api-access-tqrtx\") pod \"306310b5-6753-4a5a-b279-41e070c2f970\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.042811 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-inventory\") pod \"306310b5-6753-4a5a-b279-41e070c2f970\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.042900 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-ssh-key-openstack-edpm-ipam\") pod \"306310b5-6753-4a5a-b279-41e070c2f970\" (UID: \"306310b5-6753-4a5a-b279-41e070c2f970\") " Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.049665 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306310b5-6753-4a5a-b279-41e070c2f970-kube-api-access-tqrtx" (OuterVolumeSpecName: "kube-api-access-tqrtx") pod "306310b5-6753-4a5a-b279-41e070c2f970" (UID: "306310b5-6753-4a5a-b279-41e070c2f970"). InnerVolumeSpecName "kube-api-access-tqrtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.074057 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "306310b5-6753-4a5a-b279-41e070c2f970" (UID: "306310b5-6753-4a5a-b279-41e070c2f970"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.075642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-inventory" (OuterVolumeSpecName: "inventory") pod "306310b5-6753-4a5a-b279-41e070c2f970" (UID: "306310b5-6753-4a5a-b279-41e070c2f970"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.145364 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqrtx\" (UniqueName: \"kubernetes.io/projected/306310b5-6753-4a5a-b279-41e070c2f970-kube-api-access-tqrtx\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.145439 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.145455 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/306310b5-6753-4a5a-b279-41e070c2f970-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.569281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" 
event={"ID":"306310b5-6753-4a5a-b279-41e070c2f970","Type":"ContainerDied","Data":"b30df97a92b3a15ae25bce89dd2f5ce89a7eb993b959de8b80d5fe7394d18c29"} Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.569322 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30df97a92b3a15ae25bce89dd2f5ce89a7eb993b959de8b80d5fe7394d18c29" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.569370 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lr429" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.722062 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf"] Jan 25 08:20:44 crc kubenswrapper[4832]: E0125 08:20:44.722483 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306310b5-6753-4a5a-b279-41e070c2f970" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.722499 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="306310b5-6753-4a5a-b279-41e070c2f970" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.722701 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="306310b5-6753-4a5a-b279-41e070c2f970" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.723310 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.727073 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.727151 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.727363 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.727454 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.736634 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.738212 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf"] Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.781330 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.781415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: 
\"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.781444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.781614 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnbh\" (UniqueName: \"kubernetes.io/projected/146a1b8e-1733-40ca-81a5-d73122618f4d-kube-api-access-8bnbh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.883542 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.883783 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 
08:20:44.883856 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.884124 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bnbh\" (UniqueName: \"kubernetes.io/projected/146a1b8e-1733-40ca-81a5-d73122618f4d-kube-api-access-8bnbh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.888728 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.888799 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.900421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-ssh-key-openstack-edpm-ipam\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:44 crc kubenswrapper[4832]: I0125 08:20:44.903577 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnbh\" (UniqueName: \"kubernetes.io/projected/146a1b8e-1733-40ca-81a5-d73122618f4d-kube-api-access-8bnbh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:45 crc kubenswrapper[4832]: I0125 08:20:45.053710 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:20:45 crc kubenswrapper[4832]: I0125 08:20:45.621489 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf"] Jan 25 08:20:45 crc kubenswrapper[4832]: I0125 08:20:45.709570 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 25 08:20:46 crc kubenswrapper[4832]: I0125 08:20:46.590966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" event={"ID":"146a1b8e-1733-40ca-81a5-d73122618f4d","Type":"ContainerStarted","Data":"bf46844b6a9a9ca1cc5a905cb438dfcf848a23bd2a232c75689ec5dbf2c499f2"} Jan 25 08:20:46 crc kubenswrapper[4832]: I0125 08:20:46.591212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" event={"ID":"146a1b8e-1733-40ca-81a5-d73122618f4d","Type":"ContainerStarted","Data":"5a59fb8f495bb4a45e2ed01320f554641ec36082f45f54266e02c95e439332aa"} Jan 25 08:20:46 crc kubenswrapper[4832]: I0125 08:20:46.634818 4832 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" podStartSLOduration=2.25996034 podStartE2EDuration="2.634795455s" podCreationTimestamp="2026-01-25 08:20:44 +0000 UTC" firstStartedPulling="2026-01-25 08:20:45.626514388 +0000 UTC m=+1428.300337921" lastFinishedPulling="2026-01-25 08:20:46.001349503 +0000 UTC m=+1428.675173036" observedRunningTime="2026-01-25 08:20:46.602954599 +0000 UTC m=+1429.276778162" watchObservedRunningTime="2026-01-25 08:20:46.634795455 +0000 UTC m=+1429.308618988" Jan 25 08:20:52 crc kubenswrapper[4832]: I0125 08:20:52.150313 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:20:52 crc kubenswrapper[4832]: I0125 08:20:52.151076 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:21:22 crc kubenswrapper[4832]: I0125 08:21:22.149241 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:21:22 crc kubenswrapper[4832]: I0125 08:21:22.149990 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 25 08:21:39 crc kubenswrapper[4832]: I0125 08:21:39.602405 4832 scope.go:117] "RemoveContainer" containerID="9ca814f6b8251cfd6b10bb677f8a7dcbc1d7ac5e4285315c0bb7306bb32d833a" Jan 25 08:21:39 crc kubenswrapper[4832]: I0125 08:21:39.645684 4832 scope.go:117] "RemoveContainer" containerID="db57b244a480c2cd03b457004010e87222d6aaee3be4574b4d43bf073cb5417a" Jan 25 08:21:39 crc kubenswrapper[4832]: I0125 08:21:39.698000 4832 scope.go:117] "RemoveContainer" containerID="9d3da0a7bdd1779a51a05bb43d06cfc2079f43c7facd448746b691f4951b451d" Jan 25 08:21:39 crc kubenswrapper[4832]: I0125 08:21:39.726989 4832 scope.go:117] "RemoveContainer" containerID="449a56dd7d9c8f7d92a1b953146140e5c3eff8d435d90869a9361ea033ab56a2" Jan 25 08:21:52 crc kubenswrapper[4832]: I0125 08:21:52.150363 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:21:52 crc kubenswrapper[4832]: I0125 08:21:52.150978 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:21:52 crc kubenswrapper[4832]: I0125 08:21:52.151028 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:21:52 crc kubenswrapper[4832]: I0125 08:21:52.151694 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f"} 
pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:21:52 crc kubenswrapper[4832]: I0125 08:21:52.151814 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" gracePeriod=600 Jan 25 08:21:52 crc kubenswrapper[4832]: E0125 08:21:52.275439 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:21:53 crc kubenswrapper[4832]: I0125 08:21:53.177219 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" exitCode=0 Jan 25 08:21:53 crc kubenswrapper[4832]: I0125 08:21:53.177307 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f"} Jan 25 08:21:53 crc kubenswrapper[4832]: I0125 08:21:53.177509 4832 scope.go:117] "RemoveContainer" containerID="a703522300807412e74dfb0216f7c46b79210bcc992ea5f87976c5936fa1c4d9" Jan 25 08:21:53 crc kubenswrapper[4832]: I0125 08:21:53.178348 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 
25 08:21:53 crc kubenswrapper[4832]: E0125 08:21:53.178774 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:22:05 crc kubenswrapper[4832]: I0125 08:22:05.669765 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:22:05 crc kubenswrapper[4832]: E0125 08:22:05.670993 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:22:20 crc kubenswrapper[4832]: I0125 08:22:20.670108 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:22:20 crc kubenswrapper[4832]: E0125 08:22:20.671866 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:22:35 crc kubenswrapper[4832]: I0125 08:22:35.670354 4832 scope.go:117] "RemoveContainer" 
containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:22:35 crc kubenswrapper[4832]: E0125 08:22:35.671185 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:22:39 crc kubenswrapper[4832]: I0125 08:22:39.826155 4832 scope.go:117] "RemoveContainer" containerID="d399a17cccba09c5367e9af52b2eed1ccb200a38317606a105d12e84fbc4af18" Jan 25 08:22:48 crc kubenswrapper[4832]: I0125 08:22:48.669553 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:22:48 crc kubenswrapper[4832]: E0125 08:22:48.670363 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.072008 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mswrf"] Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.075426 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.092141 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswrf"] Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.176692 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-catalog-content\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.177036 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-utilities\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.177104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcxp\" (UniqueName: \"kubernetes.io/projected/beb90b9d-7550-4c29-a308-f2a340eff0d9-kube-api-access-9bcxp\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.279736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-catalog-content\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.279844 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-utilities\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.279929 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcxp\" (UniqueName: \"kubernetes.io/projected/beb90b9d-7550-4c29-a308-f2a340eff0d9-kube-api-access-9bcxp\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.280407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-catalog-content\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.280453 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-utilities\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.304829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcxp\" (UniqueName: \"kubernetes.io/projected/beb90b9d-7550-4c29-a308-f2a340eff0d9-kube-api-access-9bcxp\") pod \"redhat-marketplace-mswrf\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.414020 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:02 crc kubenswrapper[4832]: I0125 08:23:02.919636 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswrf"] Jan 25 08:23:04 crc kubenswrapper[4832]: I0125 08:23:03.670476 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:23:04 crc kubenswrapper[4832]: E0125 08:23:03.671101 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:23:04 crc kubenswrapper[4832]: I0125 08:23:03.858511 4832 generic.go:334] "Generic (PLEG): container finished" podID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerID="d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8" exitCode=0 Jan 25 08:23:04 crc kubenswrapper[4832]: I0125 08:23:03.858573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswrf" event={"ID":"beb90b9d-7550-4c29-a308-f2a340eff0d9","Type":"ContainerDied","Data":"d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8"} Jan 25 08:23:04 crc kubenswrapper[4832]: I0125 08:23:03.858613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswrf" event={"ID":"beb90b9d-7550-4c29-a308-f2a340eff0d9","Type":"ContainerStarted","Data":"b56151c66c86483da739472d5b0d87a250063742f0e50fd60dd4e49e00e46903"} Jan 25 08:23:04 crc kubenswrapper[4832]: I0125 08:23:04.868731 4832 generic.go:334] "Generic (PLEG): container finished" podID="beb90b9d-7550-4c29-a308-f2a340eff0d9" 
containerID="1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40" exitCode=0 Jan 25 08:23:04 crc kubenswrapper[4832]: I0125 08:23:04.868884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswrf" event={"ID":"beb90b9d-7550-4c29-a308-f2a340eff0d9","Type":"ContainerDied","Data":"1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40"} Jan 25 08:23:05 crc kubenswrapper[4832]: I0125 08:23:05.890123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswrf" event={"ID":"beb90b9d-7550-4c29-a308-f2a340eff0d9","Type":"ContainerStarted","Data":"6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920"} Jan 25 08:23:05 crc kubenswrapper[4832]: I0125 08:23:05.913912 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mswrf" podStartSLOduration=2.263187106 podStartE2EDuration="3.913890925s" podCreationTimestamp="2026-01-25 08:23:02 +0000 UTC" firstStartedPulling="2026-01-25 08:23:03.860595365 +0000 UTC m=+1566.534418898" lastFinishedPulling="2026-01-25 08:23:05.511299184 +0000 UTC m=+1568.185122717" observedRunningTime="2026-01-25 08:23:05.905547514 +0000 UTC m=+1568.579371067" watchObservedRunningTime="2026-01-25 08:23:05.913890925 +0000 UTC m=+1568.587714458" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.459147 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zlnmc"] Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.461369 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.475681 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlnmc"] Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.606799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-catalog-content\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.606875 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-utilities\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.606930 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz4k9\" (UniqueName: \"kubernetes.io/projected/52698310-e203-4936-8bb9-9779921381cb-kube-api-access-fz4k9\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.709530 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-utilities\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.709601 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fz4k9\" (UniqueName: \"kubernetes.io/projected/52698310-e203-4936-8bb9-9779921381cb-kube-api-access-fz4k9\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.709770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-catalog-content\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.712025 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-catalog-content\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.712143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-utilities\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.746329 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz4k9\" (UniqueName: \"kubernetes.io/projected/52698310-e203-4936-8bb9-9779921381cb-kube-api-access-fz4k9\") pod \"community-operators-zlnmc\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:06 crc kubenswrapper[4832]: I0125 08:23:06.778186 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:07 crc kubenswrapper[4832]: I0125 08:23:07.294239 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlnmc"] Jan 25 08:23:07 crc kubenswrapper[4832]: I0125 08:23:07.920749 4832 generic.go:334] "Generic (PLEG): container finished" podID="52698310-e203-4936-8bb9-9779921381cb" containerID="2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565" exitCode=0 Jan 25 08:23:07 crc kubenswrapper[4832]: I0125 08:23:07.920823 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnmc" event={"ID":"52698310-e203-4936-8bb9-9779921381cb","Type":"ContainerDied","Data":"2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565"} Jan 25 08:23:07 crc kubenswrapper[4832]: I0125 08:23:07.921123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnmc" event={"ID":"52698310-e203-4936-8bb9-9779921381cb","Type":"ContainerStarted","Data":"a4fddb09c48d40dbb9769c4b821ab5b279d543da6e830694daf706b5955beef4"} Jan 25 08:23:08 crc kubenswrapper[4832]: I0125 08:23:08.939074 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnmc" event={"ID":"52698310-e203-4936-8bb9-9779921381cb","Type":"ContainerStarted","Data":"0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d"} Jan 25 08:23:09 crc kubenswrapper[4832]: I0125 08:23:09.949119 4832 generic.go:334] "Generic (PLEG): container finished" podID="52698310-e203-4936-8bb9-9779921381cb" containerID="0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d" exitCode=0 Jan 25 08:23:09 crc kubenswrapper[4832]: I0125 08:23:09.949229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnmc" 
event={"ID":"52698310-e203-4936-8bb9-9779921381cb","Type":"ContainerDied","Data":"0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d"} Jan 25 08:23:10 crc kubenswrapper[4832]: I0125 08:23:10.962617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnmc" event={"ID":"52698310-e203-4936-8bb9-9779921381cb","Type":"ContainerStarted","Data":"263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348"} Jan 25 08:23:10 crc kubenswrapper[4832]: I0125 08:23:10.986775 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zlnmc" podStartSLOduration=2.513831846 podStartE2EDuration="4.986753629s" podCreationTimestamp="2026-01-25 08:23:06 +0000 UTC" firstStartedPulling="2026-01-25 08:23:07.922215527 +0000 UTC m=+1570.596039050" lastFinishedPulling="2026-01-25 08:23:10.3951373 +0000 UTC m=+1573.068960833" observedRunningTime="2026-01-25 08:23:10.979071509 +0000 UTC m=+1573.652895052" watchObservedRunningTime="2026-01-25 08:23:10.986753629 +0000 UTC m=+1573.660577162" Jan 25 08:23:12 crc kubenswrapper[4832]: I0125 08:23:12.415102 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:12 crc kubenswrapper[4832]: I0125 08:23:12.415515 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:12 crc kubenswrapper[4832]: I0125 08:23:12.463213 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:13 crc kubenswrapper[4832]: I0125 08:23:13.027784 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:13 crc kubenswrapper[4832]: I0125 08:23:13.644785 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-mswrf"] Jan 25 08:23:14 crc kubenswrapper[4832]: I0125 08:23:14.996555 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mswrf" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="registry-server" containerID="cri-o://6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920" gracePeriod=2 Jan 25 08:23:15 crc kubenswrapper[4832]: I0125 08:23:15.670115 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:23:15 crc kubenswrapper[4832]: E0125 08:23:15.671001 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:23:15 crc kubenswrapper[4832]: I0125 08:23:15.947917 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.008409 4832 generic.go:334] "Generic (PLEG): container finished" podID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerID="6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920" exitCode=0 Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.008465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswrf" event={"ID":"beb90b9d-7550-4c29-a308-f2a340eff0d9","Type":"ContainerDied","Data":"6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920"} Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.008498 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mswrf" event={"ID":"beb90b9d-7550-4c29-a308-f2a340eff0d9","Type":"ContainerDied","Data":"b56151c66c86483da739472d5b0d87a250063742f0e50fd60dd4e49e00e46903"} Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.008511 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mswrf" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.008515 4832 scope.go:117] "RemoveContainer" containerID="6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.034328 4832 scope.go:117] "RemoveContainer" containerID="1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.042502 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-catalog-content\") pod \"beb90b9d-7550-4c29-a308-f2a340eff0d9\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.042729 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-utilities\") pod \"beb90b9d-7550-4c29-a308-f2a340eff0d9\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.042821 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bcxp\" (UniqueName: \"kubernetes.io/projected/beb90b9d-7550-4c29-a308-f2a340eff0d9-kube-api-access-9bcxp\") pod \"beb90b9d-7550-4c29-a308-f2a340eff0d9\" (UID: \"beb90b9d-7550-4c29-a308-f2a340eff0d9\") " Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.043371 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-utilities" (OuterVolumeSpecName: "utilities") pod "beb90b9d-7550-4c29-a308-f2a340eff0d9" (UID: "beb90b9d-7550-4c29-a308-f2a340eff0d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.044083 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.054379 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb90b9d-7550-4c29-a308-f2a340eff0d9-kube-api-access-9bcxp" (OuterVolumeSpecName: "kube-api-access-9bcxp") pod "beb90b9d-7550-4c29-a308-f2a340eff0d9" (UID: "beb90b9d-7550-4c29-a308-f2a340eff0d9"). InnerVolumeSpecName "kube-api-access-9bcxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.065584 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beb90b9d-7550-4c29-a308-f2a340eff0d9" (UID: "beb90b9d-7550-4c29-a308-f2a340eff0d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.077132 4832 scope.go:117] "RemoveContainer" containerID="d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.147308 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb90b9d-7550-4c29-a308-f2a340eff0d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.147348 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bcxp\" (UniqueName: \"kubernetes.io/projected/beb90b9d-7550-4c29-a308-f2a340eff0d9-kube-api-access-9bcxp\") on node \"crc\" DevicePath \"\"" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.167568 4832 scope.go:117] "RemoveContainer" containerID="6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920" Jan 25 08:23:16 crc kubenswrapper[4832]: E0125 08:23:16.168258 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920\": container with ID starting with 6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920 not found: ID does not exist" containerID="6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.168305 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920"} err="failed to get container status \"6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920\": rpc error: code = NotFound desc = could not find container \"6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920\": container with ID starting with 6d28890a24723487fac619f3412ea98d7c99d800197f8b90635ecc19dbd1d920 not 
found: ID does not exist" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.168332 4832 scope.go:117] "RemoveContainer" containerID="1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40" Jan 25 08:23:16 crc kubenswrapper[4832]: E0125 08:23:16.169076 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40\": container with ID starting with 1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40 not found: ID does not exist" containerID="1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.169338 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40"} err="failed to get container status \"1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40\": rpc error: code = NotFound desc = could not find container \"1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40\": container with ID starting with 1bf49ce2155d881a3ad6643a7eaa2293a016373768f3024f1579bcdb76afec40 not found: ID does not exist" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.169368 4832 scope.go:117] "RemoveContainer" containerID="d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8" Jan 25 08:23:16 crc kubenswrapper[4832]: E0125 08:23:16.169838 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8\": container with ID starting with d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8 not found: ID does not exist" containerID="d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.169924 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8"} err="failed to get container status \"d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8\": rpc error: code = NotFound desc = could not find container \"d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8\": container with ID starting with d9cecf2b8b9d34d3ac7c446e4a5ba050d8973652ad7f51583caea710b6b873e8 not found: ID does not exist" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.367173 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswrf"] Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.383369 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mswrf"] Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.779281 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.779379 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:16 crc kubenswrapper[4832]: I0125 08:23:16.837272 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:17 crc kubenswrapper[4832]: I0125 08:23:17.086902 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:17 crc kubenswrapper[4832]: I0125 08:23:17.684538 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" path="/var/lib/kubelet/pods/beb90b9d-7550-4c29-a308-f2a340eff0d9/volumes" Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.046241 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-zlnmc"] Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.047042 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zlnmc" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="registry-server" containerID="cri-o://263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348" gracePeriod=2 Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.510424 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.627449 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-catalog-content\") pod \"52698310-e203-4936-8bb9-9779921381cb\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.627572 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz4k9\" (UniqueName: \"kubernetes.io/projected/52698310-e203-4936-8bb9-9779921381cb-kube-api-access-fz4k9\") pod \"52698310-e203-4936-8bb9-9779921381cb\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.627650 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-utilities\") pod \"52698310-e203-4936-8bb9-9779921381cb\" (UID: \"52698310-e203-4936-8bb9-9779921381cb\") " Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.628758 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-utilities" (OuterVolumeSpecName: "utilities") pod "52698310-e203-4936-8bb9-9779921381cb" (UID: 
"52698310-e203-4936-8bb9-9779921381cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.635192 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52698310-e203-4936-8bb9-9779921381cb-kube-api-access-fz4k9" (OuterVolumeSpecName: "kube-api-access-fz4k9") pod "52698310-e203-4936-8bb9-9779921381cb" (UID: "52698310-e203-4936-8bb9-9779921381cb"). InnerVolumeSpecName "kube-api-access-fz4k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.729732 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz4k9\" (UniqueName: \"kubernetes.io/projected/52698310-e203-4936-8bb9-9779921381cb-kube-api-access-fz4k9\") on node \"crc\" DevicePath \"\"" Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.729772 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.902706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52698310-e203-4936-8bb9-9779921381cb" (UID: "52698310-e203-4936-8bb9-9779921381cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:23:19 crc kubenswrapper[4832]: I0125 08:23:19.933919 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52698310-e203-4936-8bb9-9779921381cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.051833 4832 generic.go:334] "Generic (PLEG): container finished" podID="52698310-e203-4936-8bb9-9779921381cb" containerID="263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348" exitCode=0 Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.051887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnmc" event={"ID":"52698310-e203-4936-8bb9-9779921381cb","Type":"ContainerDied","Data":"263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348"} Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.051899 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlnmc" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.051921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlnmc" event={"ID":"52698310-e203-4936-8bb9-9779921381cb","Type":"ContainerDied","Data":"a4fddb09c48d40dbb9769c4b821ab5b279d543da6e830694daf706b5955beef4"} Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.051945 4832 scope.go:117] "RemoveContainer" containerID="263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.081672 4832 scope.go:117] "RemoveContainer" containerID="0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.089669 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlnmc"] Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.108135 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zlnmc"] Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.111619 4832 scope.go:117] "RemoveContainer" containerID="2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.145637 4832 scope.go:117] "RemoveContainer" containerID="263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348" Jan 25 08:23:20 crc kubenswrapper[4832]: E0125 08:23:20.146426 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348\": container with ID starting with 263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348 not found: ID does not exist" containerID="263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.146459 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348"} err="failed to get container status \"263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348\": rpc error: code = NotFound desc = could not find container \"263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348\": container with ID starting with 263e31b506005bb84950e88825c3aca50d29aa0f4f09988094406f7d92ed0348 not found: ID does not exist" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.146481 4832 scope.go:117] "RemoveContainer" containerID="0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d" Jan 25 08:23:20 crc kubenswrapper[4832]: E0125 08:23:20.146821 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d\": container with ID starting with 0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d not found: ID does not exist" containerID="0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.146863 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d"} err="failed to get container status \"0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d\": rpc error: code = NotFound desc = could not find container \"0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d\": container with ID starting with 0d12b66c9aa66c8995ba4258c0326e711d8c5aff07851a031a68dda37557a76d not found: ID does not exist" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.146895 4832 scope.go:117] "RemoveContainer" containerID="2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565" Jan 25 08:23:20 crc kubenswrapper[4832]: E0125 
08:23:20.147181 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565\": container with ID starting with 2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565 not found: ID does not exist" containerID="2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565" Jan 25 08:23:20 crc kubenswrapper[4832]: I0125 08:23:20.147205 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565"} err="failed to get container status \"2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565\": rpc error: code = NotFound desc = could not find container \"2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565\": container with ID starting with 2cef2077ac4cbb42db5d01eb795800c780bb558a99d1efcc3c97f0dd7077e565 not found: ID does not exist" Jan 25 08:23:21 crc kubenswrapper[4832]: I0125 08:23:21.681241 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52698310-e203-4936-8bb9-9779921381cb" path="/var/lib/kubelet/pods/52698310-e203-4936-8bb9-9779921381cb/volumes" Jan 25 08:23:26 crc kubenswrapper[4832]: I0125 08:23:26.670723 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:23:26 crc kubenswrapper[4832]: E0125 08:23:26.672426 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:23:40 crc kubenswrapper[4832]: I0125 08:23:40.670111 
4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:23:40 crc kubenswrapper[4832]: E0125 08:23:40.671105 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:23:53 crc kubenswrapper[4832]: I0125 08:23:53.670120 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:23:53 crc kubenswrapper[4832]: E0125 08:23:53.670877 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:23:59 crc kubenswrapper[4832]: I0125 08:23:59.410224 4832 generic.go:334] "Generic (PLEG): container finished" podID="146a1b8e-1733-40ca-81a5-d73122618f4d" containerID="bf46844b6a9a9ca1cc5a905cb438dfcf848a23bd2a232c75689ec5dbf2c499f2" exitCode=0 Jan 25 08:23:59 crc kubenswrapper[4832]: I0125 08:23:59.410313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" event={"ID":"146a1b8e-1733-40ca-81a5-d73122618f4d","Type":"ContainerDied","Data":"bf46844b6a9a9ca1cc5a905cb438dfcf848a23bd2a232c75689ec5dbf2c499f2"} Jan 25 08:24:00 crc kubenswrapper[4832]: I0125 08:24:00.841010 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:24:00 crc kubenswrapper[4832]: I0125 08:24:00.975092 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-inventory\") pod \"146a1b8e-1733-40ca-81a5-d73122618f4d\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " Jan 25 08:24:00 crc kubenswrapper[4832]: I0125 08:24:00.975263 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bnbh\" (UniqueName: \"kubernetes.io/projected/146a1b8e-1733-40ca-81a5-d73122618f4d-kube-api-access-8bnbh\") pod \"146a1b8e-1733-40ca-81a5-d73122618f4d\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " Jan 25 08:24:00 crc kubenswrapper[4832]: I0125 08:24:00.975301 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-ssh-key-openstack-edpm-ipam\") pod \"146a1b8e-1733-40ca-81a5-d73122618f4d\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " Jan 25 08:24:00 crc kubenswrapper[4832]: I0125 08:24:00.975354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-bootstrap-combined-ca-bundle\") pod \"146a1b8e-1733-40ca-81a5-d73122618f4d\" (UID: \"146a1b8e-1733-40ca-81a5-d73122618f4d\") " Jan 25 08:24:00 crc kubenswrapper[4832]: I0125 08:24:00.986094 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "146a1b8e-1733-40ca-81a5-d73122618f4d" (UID: "146a1b8e-1733-40ca-81a5-d73122618f4d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:24:00 crc kubenswrapper[4832]: I0125 08:24:00.990147 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146a1b8e-1733-40ca-81a5-d73122618f4d-kube-api-access-8bnbh" (OuterVolumeSpecName: "kube-api-access-8bnbh") pod "146a1b8e-1733-40ca-81a5-d73122618f4d" (UID: "146a1b8e-1733-40ca-81a5-d73122618f4d"). InnerVolumeSpecName "kube-api-access-8bnbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.008802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "146a1b8e-1733-40ca-81a5-d73122618f4d" (UID: "146a1b8e-1733-40ca-81a5-d73122618f4d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.008918 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-inventory" (OuterVolumeSpecName: "inventory") pod "146a1b8e-1733-40ca-81a5-d73122618f4d" (UID: "146a1b8e-1733-40ca-81a5-d73122618f4d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.078173 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.078449 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bnbh\" (UniqueName: \"kubernetes.io/projected/146a1b8e-1733-40ca-81a5-d73122618f4d-kube-api-access-8bnbh\") on node \"crc\" DevicePath \"\"" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.078538 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.078625 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146a1b8e-1733-40ca-81a5-d73122618f4d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.428776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" event={"ID":"146a1b8e-1733-40ca-81a5-d73122618f4d","Type":"ContainerDied","Data":"5a59fb8f495bb4a45e2ed01320f554641ec36082f45f54266e02c95e439332aa"} Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.429033 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a59fb8f495bb4a45e2ed01320f554641ec36082f45f54266e02c95e439332aa" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.428821 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.521342 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx"] Jan 25 08:24:01 crc kubenswrapper[4832]: E0125 08:24:01.521971 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="extract-content" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522007 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="extract-content" Jan 25 08:24:01 crc kubenswrapper[4832]: E0125 08:24:01.522022 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="extract-content" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522028 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="extract-content" Jan 25 08:24:01 crc kubenswrapper[4832]: E0125 08:24:01.522043 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="registry-server" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522051 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="registry-server" Jan 25 08:24:01 crc kubenswrapper[4832]: E0125 08:24:01.522086 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="extract-utilities" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522093 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="extract-utilities" Jan 25 08:24:01 crc kubenswrapper[4832]: E0125 08:24:01.522101 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="extract-utilities" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522108 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="extract-utilities" Jan 25 08:24:01 crc kubenswrapper[4832]: E0125 08:24:01.522117 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146a1b8e-1733-40ca-81a5-d73122618f4d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522124 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="146a1b8e-1733-40ca-81a5-d73122618f4d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 25 08:24:01 crc kubenswrapper[4832]: E0125 08:24:01.522163 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="registry-server" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522170 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="registry-server" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522405 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="52698310-e203-4936-8bb9-9779921381cb" containerName="registry-server" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522427 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="146a1b8e-1733-40ca-81a5-d73122618f4d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.522441 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb90b9d-7550-4c29-a308-f2a340eff0d9" containerName="registry-server" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.523351 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.530073 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.530089 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.530970 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.530982 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.532321 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx"] Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.588604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6shv\" (UniqueName: \"kubernetes.io/projected/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-kube-api-access-q6shv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.588678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 
08:24:01.588753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.690245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.690362 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6shv\" (UniqueName: \"kubernetes.io/projected/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-kube-api-access-q6shv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.690424 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.695133 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.695799 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.722160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6shv\" (UniqueName: \"kubernetes.io/projected/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-kube-api-access-q6shv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5wttx\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:01 crc kubenswrapper[4832]: I0125 08:24:01.873804 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:24:02 crc kubenswrapper[4832]: I0125 08:24:02.386892 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx"] Jan 25 08:24:02 crc kubenswrapper[4832]: I0125 08:24:02.387838 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:24:02 crc kubenswrapper[4832]: I0125 08:24:02.438154 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" event={"ID":"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f","Type":"ContainerStarted","Data":"6ccd4f64829bfda8fa953d3c91c71b2646052700da8f254381ca0f3daac2b666"} Jan 25 08:24:04 crc kubenswrapper[4832]: I0125 08:24:04.455055 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" event={"ID":"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f","Type":"ContainerStarted","Data":"f55b0e991c447f65e5eb8df48946710eaf041fb521e8c2fbb7a8af9c406c4089"} Jan 25 08:24:04 crc kubenswrapper[4832]: I0125 08:24:04.476525 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" podStartSLOduration=2.603918302 podStartE2EDuration="3.476503084s" podCreationTimestamp="2026-01-25 08:24:01 +0000 UTC" firstStartedPulling="2026-01-25 08:24:02.387250468 +0000 UTC m=+1625.061074011" lastFinishedPulling="2026-01-25 08:24:03.25983527 +0000 UTC m=+1625.933658793" observedRunningTime="2026-01-25 08:24:04.469558806 +0000 UTC m=+1627.143382349" watchObservedRunningTime="2026-01-25 08:24:04.476503084 +0000 UTC m=+1627.150326617" Jan 25 08:24:06 crc kubenswrapper[4832]: I0125 08:24:06.669780 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:24:06 crc 
kubenswrapper[4832]: E0125 08:24:06.670503 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.227932 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p2z6l"] Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.234279 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.237737 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2z6l"] Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.256425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-catalog-content\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.256520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh6tc\" (UniqueName: \"kubernetes.io/projected/4b026a6d-4463-481c-a36c-2ade1c16a71c-kube-api-access-vh6tc\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.256562 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-utilities\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.358644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh6tc\" (UniqueName: \"kubernetes.io/projected/4b026a6d-4463-481c-a36c-2ade1c16a71c-kube-api-access-vh6tc\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.358718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-utilities\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.358789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-catalog-content\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.359259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-catalog-content\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.359509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-utilities\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.387080 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh6tc\" (UniqueName: \"kubernetes.io/projected/4b026a6d-4463-481c-a36c-2ade1c16a71c-kube-api-access-vh6tc\") pod \"certified-operators-p2z6l\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:18 crc kubenswrapper[4832]: I0125 08:24:18.563992 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:19 crc kubenswrapper[4832]: I0125 08:24:19.074810 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2z6l"] Jan 25 08:24:19 crc kubenswrapper[4832]: W0125 08:24:19.081426 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b026a6d_4463_481c_a36c_2ade1c16a71c.slice/crio-07284af6fa93c16dfaca4dce7c0d6692e619b96489e2326e774fe951fc53a70f WatchSource:0}: Error finding container 07284af6fa93c16dfaca4dce7c0d6692e619b96489e2326e774fe951fc53a70f: Status 404 returned error can't find the container with id 07284af6fa93c16dfaca4dce7c0d6692e619b96489e2326e774fe951fc53a70f Jan 25 08:24:19 crc kubenswrapper[4832]: I0125 08:24:19.589010 4832 generic.go:334] "Generic (PLEG): container finished" podID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerID="c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a" exitCode=0 Jan 25 08:24:19 crc kubenswrapper[4832]: I0125 08:24:19.589051 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2z6l" 
event={"ID":"4b026a6d-4463-481c-a36c-2ade1c16a71c","Type":"ContainerDied","Data":"c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a"} Jan 25 08:24:19 crc kubenswrapper[4832]: I0125 08:24:19.589078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2z6l" event={"ID":"4b026a6d-4463-481c-a36c-2ade1c16a71c","Type":"ContainerStarted","Data":"07284af6fa93c16dfaca4dce7c0d6692e619b96489e2326e774fe951fc53a70f"} Jan 25 08:24:19 crc kubenswrapper[4832]: I0125 08:24:19.669348 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:24:19 crc kubenswrapper[4832]: E0125 08:24:19.669598 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:24:20 crc kubenswrapper[4832]: I0125 08:24:20.601263 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2z6l" event={"ID":"4b026a6d-4463-481c-a36c-2ade1c16a71c","Type":"ContainerStarted","Data":"df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c"} Jan 25 08:24:21 crc kubenswrapper[4832]: I0125 08:24:21.611618 4832 generic.go:334] "Generic (PLEG): container finished" podID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerID="df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c" exitCode=0 Jan 25 08:24:21 crc kubenswrapper[4832]: I0125 08:24:21.611669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2z6l" 
event={"ID":"4b026a6d-4463-481c-a36c-2ade1c16a71c","Type":"ContainerDied","Data":"df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c"} Jan 25 08:24:22 crc kubenswrapper[4832]: I0125 08:24:22.622321 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2z6l" event={"ID":"4b026a6d-4463-481c-a36c-2ade1c16a71c","Type":"ContainerStarted","Data":"5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06"} Jan 25 08:24:22 crc kubenswrapper[4832]: I0125 08:24:22.646083 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p2z6l" podStartSLOduration=2.2497325950000002 podStartE2EDuration="4.646065712s" podCreationTimestamp="2026-01-25 08:24:18 +0000 UTC" firstStartedPulling="2026-01-25 08:24:19.594147376 +0000 UTC m=+1642.267970909" lastFinishedPulling="2026-01-25 08:24:21.990480493 +0000 UTC m=+1644.664304026" observedRunningTime="2026-01-25 08:24:22.644944738 +0000 UTC m=+1645.318768401" watchObservedRunningTime="2026-01-25 08:24:22.646065712 +0000 UTC m=+1645.319889245" Jan 25 08:24:28 crc kubenswrapper[4832]: I0125 08:24:28.564191 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:28 crc kubenswrapper[4832]: I0125 08:24:28.564959 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:28 crc kubenswrapper[4832]: I0125 08:24:28.609348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:28 crc kubenswrapper[4832]: I0125 08:24:28.741929 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:28 crc kubenswrapper[4832]: I0125 08:24:28.851832 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-p2z6l"] Jan 25 08:24:30 crc kubenswrapper[4832]: I0125 08:24:30.700315 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p2z6l" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="registry-server" containerID="cri-o://5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06" gracePeriod=2 Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.124946 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.309217 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh6tc\" (UniqueName: \"kubernetes.io/projected/4b026a6d-4463-481c-a36c-2ade1c16a71c-kube-api-access-vh6tc\") pod \"4b026a6d-4463-481c-a36c-2ade1c16a71c\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.309595 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-catalog-content\") pod \"4b026a6d-4463-481c-a36c-2ade1c16a71c\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.310287 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-utilities\") pod \"4b026a6d-4463-481c-a36c-2ade1c16a71c\" (UID: \"4b026a6d-4463-481c-a36c-2ade1c16a71c\") " Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.311182 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-utilities" (OuterVolumeSpecName: "utilities") pod "4b026a6d-4463-481c-a36c-2ade1c16a71c" (UID: 
"4b026a6d-4463-481c-a36c-2ade1c16a71c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.317688 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b026a6d-4463-481c-a36c-2ade1c16a71c-kube-api-access-vh6tc" (OuterVolumeSpecName: "kube-api-access-vh6tc") pod "4b026a6d-4463-481c-a36c-2ade1c16a71c" (UID: "4b026a6d-4463-481c-a36c-2ade1c16a71c"). InnerVolumeSpecName "kube-api-access-vh6tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.361630 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b026a6d-4463-481c-a36c-2ade1c16a71c" (UID: "4b026a6d-4463-481c-a36c-2ade1c16a71c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.413252 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.413294 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b026a6d-4463-481c-a36c-2ade1c16a71c-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.413306 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh6tc\" (UniqueName: \"kubernetes.io/projected/4b026a6d-4463-481c-a36c-2ade1c16a71c-kube-api-access-vh6tc\") on node \"crc\" DevicePath \"\"" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.710043 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerID="5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06" exitCode=0 Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.710114 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2z6l" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.710126 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2z6l" event={"ID":"4b026a6d-4463-481c-a36c-2ade1c16a71c","Type":"ContainerDied","Data":"5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06"} Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.710615 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2z6l" event={"ID":"4b026a6d-4463-481c-a36c-2ade1c16a71c","Type":"ContainerDied","Data":"07284af6fa93c16dfaca4dce7c0d6692e619b96489e2326e774fe951fc53a70f"} Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.710637 4832 scope.go:117] "RemoveContainer" containerID="5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.744293 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2z6l"] Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.750641 4832 scope.go:117] "RemoveContainer" containerID="df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.754805 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p2z6l"] Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.778811 4832 scope.go:117] "RemoveContainer" containerID="c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.817232 4832 scope.go:117] "RemoveContainer" 
containerID="5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06" Jan 25 08:24:31 crc kubenswrapper[4832]: E0125 08:24:31.817880 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06\": container with ID starting with 5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06 not found: ID does not exist" containerID="5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.817935 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06"} err="failed to get container status \"5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06\": rpc error: code = NotFound desc = could not find container \"5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06\": container with ID starting with 5fde564d7cae8ff13dc24e8cd42acc3607421b833ebfba5cec2d503fd9bd4f06 not found: ID does not exist" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.817968 4832 scope.go:117] "RemoveContainer" containerID="df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c" Jan 25 08:24:31 crc kubenswrapper[4832]: E0125 08:24:31.818681 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c\": container with ID starting with df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c not found: ID does not exist" containerID="df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.818723 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c"} err="failed to get container status \"df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c\": rpc error: code = NotFound desc = could not find container \"df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c\": container with ID starting with df2e3bc8ef16c838ced9aa9c2cb04c3f88e04492a7fac8333d0090736207f09c not found: ID does not exist" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.818749 4832 scope.go:117] "RemoveContainer" containerID="c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a" Jan 25 08:24:31 crc kubenswrapper[4832]: E0125 08:24:31.819071 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a\": container with ID starting with c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a not found: ID does not exist" containerID="c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a" Jan 25 08:24:31 crc kubenswrapper[4832]: I0125 08:24:31.819091 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a"} err="failed to get container status \"c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a\": rpc error: code = NotFound desc = could not find container \"c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a\": container with ID starting with c4041f1e67f14717763d4f69f99a174cf07a353ef916329fc3c4f8df95b1994a not found: ID does not exist" Jan 25 08:24:33 crc kubenswrapper[4832]: I0125 08:24:33.670454 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:24:33 crc kubenswrapper[4832]: E0125 08:24:33.671084 4832 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:24:33 crc kubenswrapper[4832]: I0125 08:24:33.688792 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" path="/var/lib/kubelet/pods/4b026a6d-4463-481c-a36c-2ade1c16a71c/volumes" Jan 25 08:24:47 crc kubenswrapper[4832]: I0125 08:24:47.678054 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:24:47 crc kubenswrapper[4832]: E0125 08:24:47.678860 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:24:53 crc kubenswrapper[4832]: I0125 08:24:53.047048 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-n7gsd"] Jan 25 08:24:53 crc kubenswrapper[4832]: I0125 08:24:53.055864 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-n7gsd"] Jan 25 08:24:53 crc kubenswrapper[4832]: I0125 08:24:53.681988 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1da6c5d-2894-431a-bec2-804d998b607b" path="/var/lib/kubelet/pods/c1da6c5d-2894-431a-bec2-804d998b607b/volumes" Jan 25 08:24:54 crc kubenswrapper[4832]: I0125 08:24:54.038044 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-create-mkcbk"] Jan 25 08:24:54 crc kubenswrapper[4832]: I0125 08:24:54.049327 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-36c3-account-create-update-m7jc9"] Jan 25 08:24:54 crc kubenswrapper[4832]: I0125 08:24:54.057736 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7fa9-account-create-update-9gzv2"] Jan 25 08:24:54 crc kubenswrapper[4832]: I0125 08:24:54.064991 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7fa9-account-create-update-9gzv2"] Jan 25 08:24:54 crc kubenswrapper[4832]: I0125 08:24:54.072493 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-36c3-account-create-update-m7jc9"] Jan 25 08:24:54 crc kubenswrapper[4832]: I0125 08:24:54.080960 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mkcbk"] Jan 25 08:24:55 crc kubenswrapper[4832]: I0125 08:24:55.681845 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078f097c-bbd2-4fad-9ea6-0e92f09607c8" path="/var/lib/kubelet/pods/078f097c-bbd2-4fad-9ea6-0e92f09607c8/volumes" Jan 25 08:24:55 crc kubenswrapper[4832]: I0125 08:24:55.682575 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13555380-67de-40bf-9255-d195682c6e56" path="/var/lib/kubelet/pods/13555380-67de-40bf-9255-d195682c6e56/volumes" Jan 25 08:24:55 crc kubenswrapper[4832]: I0125 08:24:55.683548 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d61b0c-2799-4be1-a1fb-d5402ada7efd" path="/var/lib/kubelet/pods/41d61b0c-2799-4be1-a1fb-d5402ada7efd/volumes" Jan 25 08:24:58 crc kubenswrapper[4832]: I0125 08:24:58.669641 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:24:58 crc kubenswrapper[4832]: E0125 08:24:58.670179 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:24:59 crc kubenswrapper[4832]: I0125 08:24:59.030038 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d89-account-create-update-nnk7h"] Jan 25 08:24:59 crc kubenswrapper[4832]: I0125 08:24:59.039764 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-h7pph"] Jan 25 08:24:59 crc kubenswrapper[4832]: I0125 08:24:59.050254 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-h7pph"] Jan 25 08:24:59 crc kubenswrapper[4832]: I0125 08:24:59.058795 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1d89-account-create-update-nnk7h"] Jan 25 08:24:59 crc kubenswrapper[4832]: I0125 08:24:59.682628 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be22c9ab-23d0-48ef-8d5d-298d42e5590f" path="/var/lib/kubelet/pods/be22c9ab-23d0-48ef-8d5d-298d42e5590f/volumes" Jan 25 08:24:59 crc kubenswrapper[4832]: I0125 08:24:59.683284 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4884afc-1fd6-43f9-bd20-b02a682b1975" path="/var/lib/kubelet/pods/f4884afc-1fd6-43f9-bd20-b02a682b1975/volumes" Jan 25 08:25:11 crc kubenswrapper[4832]: I0125 08:25:11.669804 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:25:11 crc kubenswrapper[4832]: E0125 08:25:11.670616 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.031957 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-khdxr"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.041786 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-95bb-account-create-update-9qtwc"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.051018 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7094-account-create-update-zccgm"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.061079 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-95bb-account-create-update-9qtwc"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.070858 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dlpsc"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.080341 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-khdxr"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.089041 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bdwvt"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.096958 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7094-account-create-update-zccgm"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.106114 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bdwvt"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.114491 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dlpsc"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.123516 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-58szm"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.133321 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a9d0-account-create-update-5njf2"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.143445 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-58szm"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.150715 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a9d0-account-create-update-5njf2"] Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.682183 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a33ab1-a365-4e45-b7aa-3208d9b16fd0" path="/var/lib/kubelet/pods/15a33ab1-a365-4e45-b7aa-3208d9b16fd0/volumes" Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.682887 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db077e1-3078-4290-91ea-4e099d11584a" path="/var/lib/kubelet/pods/5db077e1-3078-4290-91ea-4e099d11584a/volumes" Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.683477 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7640ab02-6a97-40ae-9d40-99e42123e170" path="/var/lib/kubelet/pods/7640ab02-6a97-40ae-9d40-99e42123e170/volumes" Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.684053 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d32c3b-2a6c-4a1e-a1c5-146a00bbba21" path="/var/lib/kubelet/pods/78d32c3b-2a6c-4a1e-a1c5-146a00bbba21/volumes" Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.685128 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4bac199-c6e9-4bef-b649-12aa5af881ab" path="/var/lib/kubelet/pods/a4bac199-c6e9-4bef-b649-12aa5af881ab/volumes" Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.685681 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3771f9f-7c61-47ef-9977-96275f49cd91" 
path="/var/lib/kubelet/pods/b3771f9f-7c61-47ef-9977-96275f49cd91/volumes" Jan 25 08:25:17 crc kubenswrapper[4832]: I0125 08:25:17.689923 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05c514f-1bc8-45c4-aa69-e8d08cfeb515" path="/var/lib/kubelet/pods/d05c514f-1bc8-45c4-aa69-e8d08cfeb515/volumes" Jan 25 08:25:22 crc kubenswrapper[4832]: I0125 08:25:22.670046 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:25:22 crc kubenswrapper[4832]: E0125 08:25:22.673286 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:25:35 crc kubenswrapper[4832]: I0125 08:25:35.671058 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:25:35 crc kubenswrapper[4832]: E0125 08:25:35.673788 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:25:39 crc kubenswrapper[4832]: I0125 08:25:39.044411 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-csqzf"] Jan 25 08:25:39 crc kubenswrapper[4832]: I0125 08:25:39.053118 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-csqzf"] Jan 25 08:25:39 
crc kubenswrapper[4832]: I0125 08:25:39.680781 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9939bf-1855-4b5d-8b7c-38e73d8a8a10" path="/var/lib/kubelet/pods/dd9939bf-1855-4b5d-8b7c-38e73d8a8a10/volumes" Jan 25 08:25:39 crc kubenswrapper[4832]: I0125 08:25:39.993154 4832 scope.go:117] "RemoveContainer" containerID="639b3bfa6f1d4cc91f16c767b6214b91518bd1c57823b9dee0788b23bcf6a51f" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.029642 4832 scope.go:117] "RemoveContainer" containerID="201f6d2c316f2683a5cc2ce5979bc19b95b2b22bb51dca55411bd5ac69855848" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.069836 4832 scope.go:117] "RemoveContainer" containerID="a00cfde4bfde10f46126d63276bf226cdbe3bea6b92b1cb55a658b51d3217bc7" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.107694 4832 scope.go:117] "RemoveContainer" containerID="90184d494eb56a19649488a1af2182f74457b36d9179d268301e2ca7875a33f2" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.154162 4832 scope.go:117] "RemoveContainer" containerID="d01fb318cba2e3e10a5923f98bd8c0680a4aa77dd407ff0faed75e0e0e47003b" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.211349 4832 scope.go:117] "RemoveContainer" containerID="7236a7397d9b7007dd81b2829d19e0a00b651840306d949d52e7cc5e4e72fad1" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.254001 4832 scope.go:117] "RemoveContainer" containerID="851a6a1c2a9bdaeae4dfd13545cddb503f9ffdbe1ea4e4369837beefa242308a" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.274950 4832 scope.go:117] "RemoveContainer" containerID="365b82ec7d97ec39d1787c8c19e678438c6114c19446fb79b5acf88ede37d16d" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.296914 4832 scope.go:117] "RemoveContainer" containerID="51feac782fe6444dc2d01017ed8996c4c63c5a832d7d03361f500084111a7d6f" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.338752 4832 scope.go:117] "RemoveContainer" 
containerID="91f1c057cdb42c03f5d2e577b4c21aa0212750ee20de4ac6e8bbda20db4ec82a" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.412010 4832 scope.go:117] "RemoveContainer" containerID="b37e1b6972a63335a0599c0210fd8992c16cf2493470030556beaa855933526f" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.433499 4832 scope.go:117] "RemoveContainer" containerID="1b3b7c88c783e78f21260f9705950cd9a7906b374ee7543a4d8f6bf7bc36abab" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.453548 4832 scope.go:117] "RemoveContainer" containerID="4eba20c9281a894eb2807c25bdda31d05f6c3826474f98e68c9778832d038975" Jan 25 08:25:40 crc kubenswrapper[4832]: I0125 08:25:40.484174 4832 scope.go:117] "RemoveContainer" containerID="86638e548ea7882a51876bf5fa20b5eb04d1b7db97b260c72f26e6ce546a7de9" Jan 25 08:25:48 crc kubenswrapper[4832]: I0125 08:25:48.475458 4832 generic.go:334] "Generic (PLEG): container finished" podID="c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f" containerID="f55b0e991c447f65e5eb8df48946710eaf041fb521e8c2fbb7a8af9c406c4089" exitCode=0 Jan 25 08:25:48 crc kubenswrapper[4832]: I0125 08:25:48.475585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" event={"ID":"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f","Type":"ContainerDied","Data":"f55b0e991c447f65e5eb8df48946710eaf041fb521e8c2fbb7a8af9c406c4089"} Jan 25 08:25:48 crc kubenswrapper[4832]: I0125 08:25:48.670202 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:25:48 crc kubenswrapper[4832]: E0125 08:25:48.670518 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.002576 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.088015 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6shv\" (UniqueName: \"kubernetes.io/projected/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-kube-api-access-q6shv\") pod \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.088172 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-inventory\") pod \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.088257 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-ssh-key-openstack-edpm-ipam\") pod \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\" (UID: \"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f\") " Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.098821 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-kube-api-access-q6shv" (OuterVolumeSpecName: "kube-api-access-q6shv") pod "c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f" (UID: "c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f"). InnerVolumeSpecName "kube-api-access-q6shv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.118451 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-inventory" (OuterVolumeSpecName: "inventory") pod "c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f" (UID: "c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.132986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f" (UID: "c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.190940 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.190986 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.190997 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6shv\" (UniqueName: \"kubernetes.io/projected/c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f-kube-api-access-q6shv\") on node \"crc\" DevicePath \"\"" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.501912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" 
event={"ID":"c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f","Type":"ContainerDied","Data":"6ccd4f64829bfda8fa953d3c91c71b2646052700da8f254381ca0f3daac2b666"} Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.501956 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccd4f64829bfda8fa953d3c91c71b2646052700da8f254381ca0f3daac2b666" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.502010 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5wttx" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.580893 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296"] Jan 25 08:25:50 crc kubenswrapper[4832]: E0125 08:25:50.581274 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="registry-server" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.581289 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="registry-server" Jan 25 08:25:50 crc kubenswrapper[4832]: E0125 08:25:50.581312 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="extract-content" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.581320 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="extract-content" Jan 25 08:25:50 crc kubenswrapper[4832]: E0125 08:25:50.581353 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.581361 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 25 08:25:50 crc kubenswrapper[4832]: E0125 08:25:50.581373 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="extract-utilities" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.581379 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="extract-utilities" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.581614 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.581639 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b026a6d-4463-481c-a36c-2ade1c16a71c" containerName="registry-server" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.582489 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.585066 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.585148 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.585346 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.585647 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.607287 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296"] Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.701656 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjh6\" (UniqueName: \"kubernetes.io/projected/ef813e8a-d19f-4638-bd75-5cba3643b1d0-kube-api-access-wfjh6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.701818 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 
08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.701894 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.804073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.804210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjh6\" (UniqueName: \"kubernetes.io/projected/ef813e8a-d19f-4638-bd75-5cba3643b1d0-kube-api-access-wfjh6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.804354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.811000 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.811716 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.824984 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjh6\" (UniqueName: \"kubernetes.io/projected/ef813e8a-d19f-4638-bd75-5cba3643b1d0-kube-api-access-wfjh6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fr296\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:50 crc kubenswrapper[4832]: I0125 08:25:50.903620 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:25:51 crc kubenswrapper[4832]: I0125 08:25:51.459333 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296"] Jan 25 08:25:51 crc kubenswrapper[4832]: I0125 08:25:51.511049 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" event={"ID":"ef813e8a-d19f-4638-bd75-5cba3643b1d0","Type":"ContainerStarted","Data":"94339ebefa255746045f3754dc23b5774bba49d88c03577a6881ed061c236ecc"} Jan 25 08:25:52 crc kubenswrapper[4832]: I0125 08:25:52.521321 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" event={"ID":"ef813e8a-d19f-4638-bd75-5cba3643b1d0","Type":"ContainerStarted","Data":"934cf10dcdb57f7903d3127dd9c2089300a6b81a122bfc463338edfed5743b78"} Jan 25 08:25:52 crc kubenswrapper[4832]: I0125 08:25:52.536871 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" podStartSLOduration=2.044669167 podStartE2EDuration="2.536852183s" podCreationTimestamp="2026-01-25 08:25:50 +0000 UTC" firstStartedPulling="2026-01-25 08:25:51.473474608 +0000 UTC m=+1734.147298131" lastFinishedPulling="2026-01-25 08:25:51.965657594 +0000 UTC m=+1734.639481147" observedRunningTime="2026-01-25 08:25:52.53295565 +0000 UTC m=+1735.206779183" watchObservedRunningTime="2026-01-25 08:25:52.536852183 +0000 UTC m=+1735.210675716" Jan 25 08:26:02 crc kubenswrapper[4832]: I0125 08:26:02.670653 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:26:02 crc kubenswrapper[4832]: E0125 08:26:02.671712 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:26:14 crc kubenswrapper[4832]: I0125 08:26:14.670862 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:26:14 crc kubenswrapper[4832]: E0125 08:26:14.672311 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:26:16 crc kubenswrapper[4832]: I0125 08:26:16.041743 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pfc28"] Jan 25 08:26:16 crc kubenswrapper[4832]: I0125 08:26:16.053027 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pfc28"] Jan 25 08:26:17 crc kubenswrapper[4832]: I0125 08:26:17.680812 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d4e115-8ad0-4971-b4aa-cb63d0bd2c11" path="/var/lib/kubelet/pods/88d4e115-8ad0-4971-b4aa-cb63d0bd2c11/volumes" Jan 25 08:26:26 crc kubenswrapper[4832]: I0125 08:26:26.038516 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5dqnt"] Jan 25 08:26:26 crc kubenswrapper[4832]: I0125 08:26:26.048194 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5dqnt"] Jan 25 08:26:27 crc kubenswrapper[4832]: I0125 08:26:27.681541 4832 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="0d1875b5-9bf9-49f8-8600-d4e2c2804c47" path="/var/lib/kubelet/pods/0d1875b5-9bf9-49f8-8600-d4e2c2804c47/volumes" Jan 25 08:26:28 crc kubenswrapper[4832]: I0125 08:26:28.670264 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:26:28 crc kubenswrapper[4832]: E0125 08:26:28.670620 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:26:31 crc kubenswrapper[4832]: I0125 08:26:31.030313 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7tnnv"] Jan 25 08:26:31 crc kubenswrapper[4832]: I0125 08:26:31.038855 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7tnnv"] Jan 25 08:26:31 crc kubenswrapper[4832]: I0125 08:26:31.681164 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a" path="/var/lib/kubelet/pods/e1a44ba3-2a1f-4189-80d7-cd0c8795bd9a/volumes" Jan 25 08:26:38 crc kubenswrapper[4832]: I0125 08:26:38.037349 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dnzjb"] Jan 25 08:26:38 crc kubenswrapper[4832]: I0125 08:26:38.045088 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dnzjb"] Jan 25 08:26:38 crc kubenswrapper[4832]: I0125 08:26:38.052884 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xdqfx"] Jan 25 08:26:38 crc kubenswrapper[4832]: I0125 08:26:38.060367 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-xdqfx"] Jan 25 08:26:39 crc kubenswrapper[4832]: I0125 08:26:39.032802 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vrvb2"] Jan 25 08:26:39 crc kubenswrapper[4832]: I0125 08:26:39.042559 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vrvb2"] Jan 25 08:26:39 crc kubenswrapper[4832]: I0125 08:26:39.685837 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b922f3-0125-4078-8ec7-ad4edd04d0ed" path="/var/lib/kubelet/pods/88b922f3-0125-4078-8ec7-ad4edd04d0ed/volumes" Jan 25 08:26:39 crc kubenswrapper[4832]: I0125 08:26:39.687673 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e793ce7a-261b-4b97-8436-c7a5efc5e126" path="/var/lib/kubelet/pods/e793ce7a-261b-4b97-8436-c7a5efc5e126/volumes" Jan 25 08:26:39 crc kubenswrapper[4832]: I0125 08:26:39.689046 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4bbdba8-c7bc-4dd7-ae19-1655bc089a86" path="/var/lib/kubelet/pods/f4bbdba8-c7bc-4dd7-ae19-1655bc089a86/volumes" Jan 25 08:26:40 crc kubenswrapper[4832]: I0125 08:26:40.670280 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:26:40 crc kubenswrapper[4832]: E0125 08:26:40.670732 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:26:40 crc kubenswrapper[4832]: I0125 08:26:40.764804 4832 scope.go:117] "RemoveContainer" containerID="60af9015ae9720b19176d23260a846349c530a9b3b692bf9315265e29c80cfec" Jan 25 08:26:40 crc kubenswrapper[4832]: 
I0125 08:26:40.819095 4832 scope.go:117] "RemoveContainer" containerID="7d46d3eff94d22ea0ddca1e6e36f9d0cc0da8afd772359a66cb4417d7e75bfec" Jan 25 08:26:40 crc kubenswrapper[4832]: I0125 08:26:40.864951 4832 scope.go:117] "RemoveContainer" containerID="5d8a4aebb6051b9a2ea061e44a57637bc058c8f737d722b1a2136d729d292408" Jan 25 08:26:40 crc kubenswrapper[4832]: I0125 08:26:40.935250 4832 scope.go:117] "RemoveContainer" containerID="2bc24f26d829b53a811da3b1657056332cb5bca551cb0d9c4b02484b0306b433" Jan 25 08:26:40 crc kubenswrapper[4832]: I0125 08:26:40.976596 4832 scope.go:117] "RemoveContainer" containerID="882c4811454c01f87f413004ff277f6ed02b5c631dc3dfb6708b5bf0b9e8e5b1" Jan 25 08:26:41 crc kubenswrapper[4832]: I0125 08:26:41.028635 4832 scope.go:117] "RemoveContainer" containerID="55887aa70bb83eb4a9c37bbf1ffa23262c67a7a0d8e23e20ad96ff018bbb23f2" Jan 25 08:26:55 crc kubenswrapper[4832]: I0125 08:26:55.670131 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:26:56 crc kubenswrapper[4832]: I0125 08:26:56.064731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"5ee81b1e42e0e2f931beb9dc8d8ff5683471d0ba095236f471161e82f9c1c998"} Jan 25 08:27:07 crc kubenswrapper[4832]: I0125 08:27:07.588427 4832 generic.go:334] "Generic (PLEG): container finished" podID="ef813e8a-d19f-4638-bd75-5cba3643b1d0" containerID="934cf10dcdb57f7903d3127dd9c2089300a6b81a122bfc463338edfed5743b78" exitCode=0 Jan 25 08:27:07 crc kubenswrapper[4832]: I0125 08:27:07.588462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" event={"ID":"ef813e8a-d19f-4638-bd75-5cba3643b1d0","Type":"ContainerDied","Data":"934cf10dcdb57f7903d3127dd9c2089300a6b81a122bfc463338edfed5743b78"} Jan 25 08:27:08 crc 
kubenswrapper[4832]: I0125 08:27:08.982877 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.079677 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-inventory\") pod \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.079762 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjh6\" (UniqueName: \"kubernetes.io/projected/ef813e8a-d19f-4638-bd75-5cba3643b1d0-kube-api-access-wfjh6\") pod \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.079832 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-ssh-key-openstack-edpm-ipam\") pod \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\" (UID: \"ef813e8a-d19f-4638-bd75-5cba3643b1d0\") " Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.085819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef813e8a-d19f-4638-bd75-5cba3643b1d0-kube-api-access-wfjh6" (OuterVolumeSpecName: "kube-api-access-wfjh6") pod "ef813e8a-d19f-4638-bd75-5cba3643b1d0" (UID: "ef813e8a-d19f-4638-bd75-5cba3643b1d0"). InnerVolumeSpecName "kube-api-access-wfjh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.108165 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-inventory" (OuterVolumeSpecName: "inventory") pod "ef813e8a-d19f-4638-bd75-5cba3643b1d0" (UID: "ef813e8a-d19f-4638-bd75-5cba3643b1d0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.110051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ef813e8a-d19f-4638-bd75-5cba3643b1d0" (UID: "ef813e8a-d19f-4638-bd75-5cba3643b1d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.182751 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.182789 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjh6\" (UniqueName: \"kubernetes.io/projected/ef813e8a-d19f-4638-bd75-5cba3643b1d0-kube-api-access-wfjh6\") on node \"crc\" DevicePath \"\"" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.182802 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef813e8a-d19f-4638-bd75-5cba3643b1d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.606997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" 
event={"ID":"ef813e8a-d19f-4638-bd75-5cba3643b1d0","Type":"ContainerDied","Data":"94339ebefa255746045f3754dc23b5774bba49d88c03577a6881ed061c236ecc"} Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.607352 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94339ebefa255746045f3754dc23b5774bba49d88c03577a6881ed061c236ecc" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.607099 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fr296" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.696662 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565"] Jan 25 08:27:09 crc kubenswrapper[4832]: E0125 08:27:09.697210 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef813e8a-d19f-4638-bd75-5cba3643b1d0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.697238 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef813e8a-d19f-4638-bd75-5cba3643b1d0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.697503 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef813e8a-d19f-4638-bd75-5cba3643b1d0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.698226 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.700864 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.701002 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.701166 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.701669 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.714535 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565"] Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.796075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.796334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/51471519-c6e2-4ab1-9536-3443579b4bb1-kube-api-access-mw9jp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 
08:27:09.796494 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.898904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/51471519-c6e2-4ab1-9536-3443579b4bb1-kube-api-access-mw9jp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.898991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.899067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.904007 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.904433 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:09 crc kubenswrapper[4832]: I0125 08:27:09.917564 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/51471519-c6e2-4ab1-9536-3443579b4bb1-kube-api-access-mw9jp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jb565\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:10 crc kubenswrapper[4832]: I0125 08:27:10.017891 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:10 crc kubenswrapper[4832]: I0125 08:27:10.538232 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565"] Jan 25 08:27:10 crc kubenswrapper[4832]: I0125 08:27:10.617275 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" event={"ID":"51471519-c6e2-4ab1-9536-3443579b4bb1","Type":"ContainerStarted","Data":"3caa2bb98698fd2ee950d72f2caeab1665a212a068a72e3b76f5cc98601c7694"} Jan 25 08:27:11 crc kubenswrapper[4832]: I0125 08:27:11.629207 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" event={"ID":"51471519-c6e2-4ab1-9536-3443579b4bb1","Type":"ContainerStarted","Data":"a8ab99ce75d9531be720bdedb32e46ead293e0338e75cf3ee3571b83c3fcb9ae"} Jan 25 08:27:11 crc kubenswrapper[4832]: I0125 08:27:11.647422 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" podStartSLOduration=2.1668158220000002 podStartE2EDuration="2.647387618s" podCreationTimestamp="2026-01-25 08:27:09 +0000 UTC" firstStartedPulling="2026-01-25 08:27:10.569059635 +0000 UTC m=+1813.242883168" lastFinishedPulling="2026-01-25 08:27:11.049631431 +0000 UTC m=+1813.723454964" observedRunningTime="2026-01-25 08:27:11.645538959 +0000 UTC m=+1814.319362572" watchObservedRunningTime="2026-01-25 08:27:11.647387618 +0000 UTC m=+1814.321211151" Jan 25 08:27:16 crc kubenswrapper[4832]: I0125 08:27:16.672444 4832 generic.go:334] "Generic (PLEG): container finished" podID="51471519-c6e2-4ab1-9536-3443579b4bb1" containerID="a8ab99ce75d9531be720bdedb32e46ead293e0338e75cf3ee3571b83c3fcb9ae" exitCode=0 Jan 25 08:27:16 crc kubenswrapper[4832]: I0125 08:27:16.672535 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" event={"ID":"51471519-c6e2-4ab1-9536-3443579b4bb1","Type":"ContainerDied","Data":"a8ab99ce75d9531be720bdedb32e46ead293e0338e75cf3ee3571b83c3fcb9ae"} Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.047090 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.166156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-inventory\") pod \"51471519-c6e2-4ab1-9536-3443579b4bb1\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.166348 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-ssh-key-openstack-edpm-ipam\") pod \"51471519-c6e2-4ab1-9536-3443579b4bb1\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.166404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/51471519-c6e2-4ab1-9536-3443579b4bb1-kube-api-access-mw9jp\") pod \"51471519-c6e2-4ab1-9536-3443579b4bb1\" (UID: \"51471519-c6e2-4ab1-9536-3443579b4bb1\") " Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.172105 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51471519-c6e2-4ab1-9536-3443579b4bb1-kube-api-access-mw9jp" (OuterVolumeSpecName: "kube-api-access-mw9jp") pod "51471519-c6e2-4ab1-9536-3443579b4bb1" (UID: "51471519-c6e2-4ab1-9536-3443579b4bb1"). InnerVolumeSpecName "kube-api-access-mw9jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.195937 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-inventory" (OuterVolumeSpecName: "inventory") pod "51471519-c6e2-4ab1-9536-3443579b4bb1" (UID: "51471519-c6e2-4ab1-9536-3443579b4bb1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.198887 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51471519-c6e2-4ab1-9536-3443579b4bb1" (UID: "51471519-c6e2-4ab1-9536-3443579b4bb1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.269023 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.269067 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/51471519-c6e2-4ab1-9536-3443579b4bb1-kube-api-access-mw9jp\") on node \"crc\" DevicePath \"\"" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.269081 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51471519-c6e2-4ab1-9536-3443579b4bb1-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.691826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" 
event={"ID":"51471519-c6e2-4ab1-9536-3443579b4bb1","Type":"ContainerDied","Data":"3caa2bb98698fd2ee950d72f2caeab1665a212a068a72e3b76f5cc98601c7694"} Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.691874 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3caa2bb98698fd2ee950d72f2caeab1665a212a068a72e3b76f5cc98601c7694" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.691929 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jb565" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.830084 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr"] Jan 25 08:27:18 crc kubenswrapper[4832]: E0125 08:27:18.830771 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51471519-c6e2-4ab1-9536-3443579b4bb1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.830805 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="51471519-c6e2-4ab1-9536-3443579b4bb1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.831133 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="51471519-c6e2-4ab1-9536-3443579b4bb1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.832220 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.835674 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.836536 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.836709 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.838361 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.844246 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr"] Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.983209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.983326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjnf\" (UniqueName: \"kubernetes.io/projected/112e50b5-86e0-4401-b4f9-b32be27ab508-kube-api-access-ksjnf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:18 crc kubenswrapper[4832]: I0125 08:27:18.983370 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.085450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjnf\" (UniqueName: \"kubernetes.io/projected/112e50b5-86e0-4401-b4f9-b32be27ab508-kube-api-access-ksjnf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.085501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.085612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.093029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.093030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.107557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjnf\" (UniqueName: \"kubernetes.io/projected/112e50b5-86e0-4401-b4f9-b32be27ab508-kube-api-access-ksjnf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b4dhr\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.151974 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.684955 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr"] Jan 25 08:27:19 crc kubenswrapper[4832]: I0125 08:27:19.701679 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" event={"ID":"112e50b5-86e0-4401-b4f9-b32be27ab508","Type":"ContainerStarted","Data":"ad8a08c4028b4b0d917af7b30b59e525cc9363c03781bea3d5b74fe9619176fe"} Jan 25 08:27:20 crc kubenswrapper[4832]: I0125 08:27:20.711632 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" event={"ID":"112e50b5-86e0-4401-b4f9-b32be27ab508","Type":"ContainerStarted","Data":"5b578e54520a0f6ccfa6da5cae1c915b30f0c16c952f3e406b324259097a2d92"} Jan 25 08:27:20 crc kubenswrapper[4832]: I0125 08:27:20.732710 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" podStartSLOduration=2.296439183 podStartE2EDuration="2.73268727s" podCreationTimestamp="2026-01-25 08:27:18 +0000 UTC" firstStartedPulling="2026-01-25 08:27:19.678080322 +0000 UTC m=+1822.351903855" lastFinishedPulling="2026-01-25 08:27:20.114328409 +0000 UTC m=+1822.788151942" observedRunningTime="2026-01-25 08:27:20.730782011 +0000 UTC m=+1823.404605534" watchObservedRunningTime="2026-01-25 08:27:20.73268727 +0000 UTC m=+1823.406510803" Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.059154 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fdf0-account-create-update-xcnhj"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.070594 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-30c4-account-create-update-7tq6t"] Jan 25 08:27:31 crc 
kubenswrapper[4832]: I0125 08:27:31.080745 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qfsv4"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.089024 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-734e-account-create-update-h4xzg"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.098176 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mckms"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.106134 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q8swj"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.113434 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fdf0-account-create-update-xcnhj"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.142412 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mckms"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.149661 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-30c4-account-create-update-7tq6t"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.155511 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-734e-account-create-update-h4xzg"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.161535 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qfsv4"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.171652 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q8swj"] Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.681907 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163febb0-9715-4944-8c59-0a4997e12c47" path="/var/lib/kubelet/pods/163febb0-9715-4944-8c59-0a4997e12c47/volumes" Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.682530 4832 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1d3eaf-356b-4dd4-87ed-2561b811f68e" path="/var/lib/kubelet/pods/2b1d3eaf-356b-4dd4-87ed-2561b811f68e/volumes" Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.683027 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3981045c-8650-4fda-af05-1ff4196d30de" path="/var/lib/kubelet/pods/3981045c-8650-4fda-af05-1ff4196d30de/volumes" Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.683573 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ebae8e-c265-49f1-a050-d6ae6b1ea729" path="/var/lib/kubelet/pods/48ebae8e-c265-49f1-a050-d6ae6b1ea729/volumes" Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.684569 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede7170a-cec3-43e5-b7de-d37e72f0cc11" path="/var/lib/kubelet/pods/ede7170a-cec3-43e5-b7de-d37e72f0cc11/volumes" Jan 25 08:27:31 crc kubenswrapper[4832]: I0125 08:27:31.685158 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6" path="/var/lib/kubelet/pods/f9f7e75f-369f-47ce-b9c9-9e6018f0b3a6/volumes" Jan 25 08:27:41 crc kubenswrapper[4832]: I0125 08:27:41.160704 4832 scope.go:117] "RemoveContainer" containerID="61a1a4c106ee00b40a614d814d29530aa167c26aa1937b6057642254d73285e4" Jan 25 08:27:41 crc kubenswrapper[4832]: I0125 08:27:41.183972 4832 scope.go:117] "RemoveContainer" containerID="8c5c43b555531e24c9bf75c76d9f3ae85e93dd331f3f986aa123e861dd761092" Jan 25 08:27:41 crc kubenswrapper[4832]: I0125 08:27:41.235065 4832 scope.go:117] "RemoveContainer" containerID="74feba622b39acd952edc75d90e881187844c3d737b9ade8bd9261054a4fe7df" Jan 25 08:27:41 crc kubenswrapper[4832]: I0125 08:27:41.287966 4832 scope.go:117] "RemoveContainer" containerID="8c6e45c2487cd568917904abd06657c93fb9f8e390d1bc11ee30bf0ba90c5c5a" Jan 25 08:27:41 crc kubenswrapper[4832]: I0125 08:27:41.346785 4832 scope.go:117] "RemoveContainer" 
containerID="9aed33b39d8ec4a014db4076866d65d4b3af3057eba886f29af7e602655e6bfe" Jan 25 08:27:41 crc kubenswrapper[4832]: I0125 08:27:41.390380 4832 scope.go:117] "RemoveContainer" containerID="fc03f602940db592f521266666b34d036bde2a885f9cdd5822d1a8f20d2102fc" Jan 25 08:28:00 crc kubenswrapper[4832]: I0125 08:28:00.040278 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7snwr"] Jan 25 08:28:00 crc kubenswrapper[4832]: I0125 08:28:00.050605 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7snwr"] Jan 25 08:28:01 crc kubenswrapper[4832]: I0125 08:28:01.075441 4832 generic.go:334] "Generic (PLEG): container finished" podID="112e50b5-86e0-4401-b4f9-b32be27ab508" containerID="5b578e54520a0f6ccfa6da5cae1c915b30f0c16c952f3e406b324259097a2d92" exitCode=0 Jan 25 08:28:01 crc kubenswrapper[4832]: I0125 08:28:01.075537 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" event={"ID":"112e50b5-86e0-4401-b4f9-b32be27ab508","Type":"ContainerDied","Data":"5b578e54520a0f6ccfa6da5cae1c915b30f0c16c952f3e406b324259097a2d92"} Jan 25 08:28:01 crc kubenswrapper[4832]: I0125 08:28:01.680629 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47eba52e-d8fa-4336-9c57-7006963eb712" path="/var/lib/kubelet/pods/47eba52e-d8fa-4336-9c57-7006963eb712/volumes" Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.493708 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.587530 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksjnf\" (UniqueName: \"kubernetes.io/projected/112e50b5-86e0-4401-b4f9-b32be27ab508-kube-api-access-ksjnf\") pod \"112e50b5-86e0-4401-b4f9-b32be27ab508\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.587826 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-inventory\") pod \"112e50b5-86e0-4401-b4f9-b32be27ab508\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.587884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-ssh-key-openstack-edpm-ipam\") pod \"112e50b5-86e0-4401-b4f9-b32be27ab508\" (UID: \"112e50b5-86e0-4401-b4f9-b32be27ab508\") " Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.686258 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112e50b5-86e0-4401-b4f9-b32be27ab508-kube-api-access-ksjnf" (OuterVolumeSpecName: "kube-api-access-ksjnf") pod "112e50b5-86e0-4401-b4f9-b32be27ab508" (UID: "112e50b5-86e0-4401-b4f9-b32be27ab508"). InnerVolumeSpecName "kube-api-access-ksjnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.707661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "112e50b5-86e0-4401-b4f9-b32be27ab508" (UID: "112e50b5-86e0-4401-b4f9-b32be27ab508"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.708985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-inventory" (OuterVolumeSpecName: "inventory") pod "112e50b5-86e0-4401-b4f9-b32be27ab508" (UID: "112e50b5-86e0-4401-b4f9-b32be27ab508"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.791535 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.791563 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksjnf\" (UniqueName: \"kubernetes.io/projected/112e50b5-86e0-4401-b4f9-b32be27ab508-kube-api-access-ksjnf\") on node \"crc\" DevicePath \"\"" Jan 25 08:28:02 crc kubenswrapper[4832]: I0125 08:28:02.791574 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/112e50b5-86e0-4401-b4f9-b32be27ab508-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.094614 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" 
event={"ID":"112e50b5-86e0-4401-b4f9-b32be27ab508","Type":"ContainerDied","Data":"ad8a08c4028b4b0d917af7b30b59e525cc9363c03781bea3d5b74fe9619176fe"} Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.094662 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8a08c4028b4b0d917af7b30b59e525cc9363c03781bea3d5b74fe9619176fe" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.094955 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b4dhr" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.207278 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7"] Jan 25 08:28:03 crc kubenswrapper[4832]: E0125 08:28:03.207895 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112e50b5-86e0-4401-b4f9-b32be27ab508" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.207938 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="112e50b5-86e0-4401-b4f9-b32be27ab508" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.208179 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="112e50b5-86e0-4401-b4f9-b32be27ab508" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.209038 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.213335 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.213540 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.213633 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.213770 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.219836 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7"] Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.300073 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.300471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bqn\" (UniqueName: \"kubernetes.io/projected/10ca3609-7786-4065-9125-f1460e9718f2-kube-api-access-l8bqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc 
kubenswrapper[4832]: I0125 08:28:03.300649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.402486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.402533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bqn\" (UniqueName: \"kubernetes.io/projected/10ca3609-7786-4065-9125-f1460e9718f2-kube-api-access-l8bqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.402606 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.406045 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.406258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.420221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bqn\" (UniqueName: \"kubernetes.io/projected/10ca3609-7786-4065-9125-f1460e9718f2-kube-api-access-l8bqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:03 crc kubenswrapper[4832]: I0125 08:28:03.582302 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:28:04 crc kubenswrapper[4832]: I0125 08:28:04.085841 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7"] Jan 25 08:28:04 crc kubenswrapper[4832]: I0125 08:28:04.104166 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" event={"ID":"10ca3609-7786-4065-9125-f1460e9718f2","Type":"ContainerStarted","Data":"ee09b076bb01931ffc5e74dd9b1d25972ef252797a05081ce1f3d7580ee35e48"} Jan 25 08:28:05 crc kubenswrapper[4832]: I0125 08:28:05.115277 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" event={"ID":"10ca3609-7786-4065-9125-f1460e9718f2","Type":"ContainerStarted","Data":"66fd6aca93eb8af2d5b707a7c972ffe4f0d8083cbf6ade44a5fc8a55c3ab56f2"} Jan 25 08:28:22 crc kubenswrapper[4832]: I0125 08:28:22.043791 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" podStartSLOduration=18.556026198 podStartE2EDuration="19.043769048s" podCreationTimestamp="2026-01-25 08:28:03 +0000 UTC" firstStartedPulling="2026-01-25 08:28:04.088468595 +0000 UTC m=+1866.762292128" lastFinishedPulling="2026-01-25 08:28:04.576211445 +0000 UTC m=+1867.250034978" observedRunningTime="2026-01-25 08:28:05.139486321 +0000 UTC m=+1867.813309854" watchObservedRunningTime="2026-01-25 08:28:22.043769048 +0000 UTC m=+1884.717592581" Jan 25 08:28:22 crc kubenswrapper[4832]: I0125 08:28:22.046986 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c24ss"] Jan 25 08:28:22 crc kubenswrapper[4832]: I0125 08:28:22.057785 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c24ss"] Jan 25 08:28:23 crc 
kubenswrapper[4832]: I0125 08:28:23.027978 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nglwx"] Jan 25 08:28:23 crc kubenswrapper[4832]: I0125 08:28:23.036494 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nglwx"] Jan 25 08:28:23 crc kubenswrapper[4832]: I0125 08:28:23.684831 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30535fb7-5d1d-47e6-8394-3df7f9d032eb" path="/var/lib/kubelet/pods/30535fb7-5d1d-47e6-8394-3df7f9d032eb/volumes" Jan 25 08:28:23 crc kubenswrapper[4832]: I0125 08:28:23.686323 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a99b4f-2213-4a2a-9086-e755207a4e3c" path="/var/lib/kubelet/pods/d1a99b4f-2213-4a2a-9086-e755207a4e3c/volumes" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.359617 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7vg4l"] Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.362005 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.372063 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vg4l"] Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.486138 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-catalog-content\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.486286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68jm\" (UniqueName: \"kubernetes.io/projected/92963a40-9078-47a9-b4c7-4e0181e94836-kube-api-access-d68jm\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.487651 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-utilities\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.589570 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-utilities\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.589660 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-catalog-content\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.589699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d68jm\" (UniqueName: \"kubernetes.io/projected/92963a40-9078-47a9-b4c7-4e0181e94836-kube-api-access-d68jm\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.590336 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-utilities\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.590424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-catalog-content\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.612609 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d68jm\" (UniqueName: \"kubernetes.io/projected/92963a40-9078-47a9-b4c7-4e0181e94836-kube-api-access-d68jm\") pod \"redhat-operators-7vg4l\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:26 crc kubenswrapper[4832]: I0125 08:28:26.683970 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:27 crc kubenswrapper[4832]: I0125 08:28:27.147569 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vg4l"] Jan 25 08:28:27 crc kubenswrapper[4832]: I0125 08:28:27.305399 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vg4l" event={"ID":"92963a40-9078-47a9-b4c7-4e0181e94836","Type":"ContainerStarted","Data":"0e0f05d4a7c5fa9c040af6ba312dba97205dbce43a1dcd2081251fa23ffe1660"} Jan 25 08:28:28 crc kubenswrapper[4832]: I0125 08:28:28.317214 4832 generic.go:334] "Generic (PLEG): container finished" podID="92963a40-9078-47a9-b4c7-4e0181e94836" containerID="6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd" exitCode=0 Jan 25 08:28:28 crc kubenswrapper[4832]: I0125 08:28:28.317324 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vg4l" event={"ID":"92963a40-9078-47a9-b4c7-4e0181e94836","Type":"ContainerDied","Data":"6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd"} Jan 25 08:28:30 crc kubenswrapper[4832]: I0125 08:28:30.340916 4832 generic.go:334] "Generic (PLEG): container finished" podID="92963a40-9078-47a9-b4c7-4e0181e94836" containerID="6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000" exitCode=0 Jan 25 08:28:30 crc kubenswrapper[4832]: I0125 08:28:30.341010 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vg4l" event={"ID":"92963a40-9078-47a9-b4c7-4e0181e94836","Type":"ContainerDied","Data":"6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000"} Jan 25 08:28:31 crc kubenswrapper[4832]: I0125 08:28:31.352242 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vg4l" 
event={"ID":"92963a40-9078-47a9-b4c7-4e0181e94836","Type":"ContainerStarted","Data":"9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83"} Jan 25 08:28:31 crc kubenswrapper[4832]: I0125 08:28:31.380134 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7vg4l" podStartSLOduration=2.933034046 podStartE2EDuration="5.380082357s" podCreationTimestamp="2026-01-25 08:28:26 +0000 UTC" firstStartedPulling="2026-01-25 08:28:28.320763655 +0000 UTC m=+1890.994587188" lastFinishedPulling="2026-01-25 08:28:30.767811916 +0000 UTC m=+1893.441635499" observedRunningTime="2026-01-25 08:28:31.37250276 +0000 UTC m=+1894.046326283" watchObservedRunningTime="2026-01-25 08:28:31.380082357 +0000 UTC m=+1894.053905890" Jan 25 08:28:36 crc kubenswrapper[4832]: I0125 08:28:36.684941 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:36 crc kubenswrapper[4832]: I0125 08:28:36.685366 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:36 crc kubenswrapper[4832]: I0125 08:28:36.732299 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:37 crc kubenswrapper[4832]: I0125 08:28:37.448686 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:37 crc kubenswrapper[4832]: I0125 08:28:37.500823 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7vg4l"] Jan 25 08:28:39 crc kubenswrapper[4832]: I0125 08:28:39.425456 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7vg4l" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="registry-server" 
containerID="cri-o://9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83" gracePeriod=2 Jan 25 08:28:39 crc kubenswrapper[4832]: I0125 08:28:39.853964 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:39 crc kubenswrapper[4832]: I0125 08:28:39.994621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d68jm\" (UniqueName: \"kubernetes.io/projected/92963a40-9078-47a9-b4c7-4e0181e94836-kube-api-access-d68jm\") pod \"92963a40-9078-47a9-b4c7-4e0181e94836\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " Jan 25 08:28:39 crc kubenswrapper[4832]: I0125 08:28:39.994931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-utilities\") pod \"92963a40-9078-47a9-b4c7-4e0181e94836\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " Jan 25 08:28:39 crc kubenswrapper[4832]: I0125 08:28:39.995074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-catalog-content\") pod \"92963a40-9078-47a9-b4c7-4e0181e94836\" (UID: \"92963a40-9078-47a9-b4c7-4e0181e94836\") " Jan 25 08:28:39 crc kubenswrapper[4832]: I0125 08:28:39.996191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-utilities" (OuterVolumeSpecName: "utilities") pod "92963a40-9078-47a9-b4c7-4e0181e94836" (UID: "92963a40-9078-47a9-b4c7-4e0181e94836"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.001713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92963a40-9078-47a9-b4c7-4e0181e94836-kube-api-access-d68jm" (OuterVolumeSpecName: "kube-api-access-d68jm") pod "92963a40-9078-47a9-b4c7-4e0181e94836" (UID: "92963a40-9078-47a9-b4c7-4e0181e94836"). InnerVolumeSpecName "kube-api-access-d68jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.097540 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d68jm\" (UniqueName: \"kubernetes.io/projected/92963a40-9078-47a9-b4c7-4e0181e94836-kube-api-access-d68jm\") on node \"crc\" DevicePath \"\"" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.097587 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.118131 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92963a40-9078-47a9-b4c7-4e0181e94836" (UID: "92963a40-9078-47a9-b4c7-4e0181e94836"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.198944 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92963a40-9078-47a9-b4c7-4e0181e94836-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.434760 4832 generic.go:334] "Generic (PLEG): container finished" podID="92963a40-9078-47a9-b4c7-4e0181e94836" containerID="9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83" exitCode=0 Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.434805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vg4l" event={"ID":"92963a40-9078-47a9-b4c7-4e0181e94836","Type":"ContainerDied","Data":"9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83"} Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.434826 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vg4l" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.434843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vg4l" event={"ID":"92963a40-9078-47a9-b4c7-4e0181e94836","Type":"ContainerDied","Data":"0e0f05d4a7c5fa9c040af6ba312dba97205dbce43a1dcd2081251fa23ffe1660"} Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.434865 4832 scope.go:117] "RemoveContainer" containerID="9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.464463 4832 scope.go:117] "RemoveContainer" containerID="6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.469910 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7vg4l"] Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.478163 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7vg4l"] Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.498262 4832 scope.go:117] "RemoveContainer" containerID="6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.526936 4832 scope.go:117] "RemoveContainer" containerID="9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83" Jan 25 08:28:40 crc kubenswrapper[4832]: E0125 08:28:40.527426 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83\": container with ID starting with 9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83 not found: ID does not exist" containerID="9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.527473 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83"} err="failed to get container status \"9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83\": rpc error: code = NotFound desc = could not find container \"9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83\": container with ID starting with 9f4f82d5cf788dd1324aa2df4cf1e788251e86a4fca2c155420daf2ba5538f83 not found: ID does not exist" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.527502 4832 scope.go:117] "RemoveContainer" containerID="6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000" Jan 25 08:28:40 crc kubenswrapper[4832]: E0125 08:28:40.527777 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000\": container with ID starting with 6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000 not found: ID does not exist" containerID="6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.527811 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000"} err="failed to get container status \"6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000\": rpc error: code = NotFound desc = could not find container \"6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000\": container with ID starting with 6b9df6157f054d6c2ac8e787e08e0a83aaee9321d04c6a0838a9b64855243000 not found: ID does not exist" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.527834 4832 scope.go:117] "RemoveContainer" containerID="6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd" Jan 25 08:28:40 crc kubenswrapper[4832]: E0125 
08:28:40.528024 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd\": container with ID starting with 6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd not found: ID does not exist" containerID="6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd" Jan 25 08:28:40 crc kubenswrapper[4832]: I0125 08:28:40.528045 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd"} err="failed to get container status \"6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd\": rpc error: code = NotFound desc = could not find container \"6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd\": container with ID starting with 6da18aaf25b2f4bc0295d6eaef7747535f7b2d5732e0acfe4b783dd755d3b4cd not found: ID does not exist" Jan 25 08:28:41 crc kubenswrapper[4832]: I0125 08:28:41.532500 4832 scope.go:117] "RemoveContainer" containerID="76d01e0bfcc0872f53687478ef0953e42b8d701cf8269f78bc992fc53ee4a3b2" Jan 25 08:28:41 crc kubenswrapper[4832]: I0125 08:28:41.575203 4832 scope.go:117] "RemoveContainer" containerID="72124bd7bf49d598aa55b3e27272ea9046d23af883d96705c9dd9a7fe614d8f3" Jan 25 08:28:41 crc kubenswrapper[4832]: I0125 08:28:41.630039 4832 scope.go:117] "RemoveContainer" containerID="574faa8798ceac6b8e063d9c738b9da32df65a6d57fde1ba725961285d3d8d0e" Jan 25 08:28:41 crc kubenswrapper[4832]: I0125 08:28:41.683506 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" path="/var/lib/kubelet/pods/92963a40-9078-47a9-b4c7-4e0181e94836/volumes" Jan 25 08:29:02 crc kubenswrapper[4832]: I0125 08:29:02.629607 4832 generic.go:334] "Generic (PLEG): container finished" podID="10ca3609-7786-4065-9125-f1460e9718f2" 
containerID="66fd6aca93eb8af2d5b707a7c972ffe4f0d8083cbf6ade44a5fc8a55c3ab56f2" exitCode=0 Jan 25 08:29:02 crc kubenswrapper[4832]: I0125 08:29:02.629918 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" event={"ID":"10ca3609-7786-4065-9125-f1460e9718f2","Type":"ContainerDied","Data":"66fd6aca93eb8af2d5b707a7c972ffe4f0d8083cbf6ade44a5fc8a55c3ab56f2"} Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.037674 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.062579 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8bqn\" (UniqueName: \"kubernetes.io/projected/10ca3609-7786-4065-9125-f1460e9718f2-kube-api-access-l8bqn\") pod \"10ca3609-7786-4065-9125-f1460e9718f2\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.062726 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-ssh-key-openstack-edpm-ipam\") pod \"10ca3609-7786-4065-9125-f1460e9718f2\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.062759 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-inventory\") pod \"10ca3609-7786-4065-9125-f1460e9718f2\" (UID: \"10ca3609-7786-4065-9125-f1460e9718f2\") " Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.072083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ca3609-7786-4065-9125-f1460e9718f2-kube-api-access-l8bqn" (OuterVolumeSpecName: 
"kube-api-access-l8bqn") pod "10ca3609-7786-4065-9125-f1460e9718f2" (UID: "10ca3609-7786-4065-9125-f1460e9718f2"). InnerVolumeSpecName "kube-api-access-l8bqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.096548 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-inventory" (OuterVolumeSpecName: "inventory") pod "10ca3609-7786-4065-9125-f1460e9718f2" (UID: "10ca3609-7786-4065-9125-f1460e9718f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.100978 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10ca3609-7786-4065-9125-f1460e9718f2" (UID: "10ca3609-7786-4065-9125-f1460e9718f2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.164286 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.164331 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8bqn\" (UniqueName: \"kubernetes.io/projected/10ca3609-7786-4065-9125-f1460e9718f2-kube-api-access-l8bqn\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.164345 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10ca3609-7786-4065-9125-f1460e9718f2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.647041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" event={"ID":"10ca3609-7786-4065-9125-f1460e9718f2","Type":"ContainerDied","Data":"ee09b076bb01931ffc5e74dd9b1d25972ef252797a05081ce1f3d7580ee35e48"} Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.647077 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.647092 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee09b076bb01931ffc5e74dd9b1d25972ef252797a05081ce1f3d7580ee35e48" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.740117 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7xcl5"] Jan 25 08:29:04 crc kubenswrapper[4832]: E0125 08:29:04.740528 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="registry-server" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.740545 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="registry-server" Jan 25 08:29:04 crc kubenswrapper[4832]: E0125 08:29:04.740561 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ca3609-7786-4065-9125-f1460e9718f2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.740570 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ca3609-7786-4065-9125-f1460e9718f2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:04 crc kubenswrapper[4832]: E0125 08:29:04.740605 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="extract-content" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.740613 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="extract-content" Jan 25 08:29:04 crc kubenswrapper[4832]: E0125 08:29:04.740624 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="extract-utilities" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 
08:29:04.740631 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="extract-utilities" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.740815 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ca3609-7786-4065-9125-f1460e9718f2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.740829 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="92963a40-9078-47a9-b4c7-4e0181e94836" containerName="registry-server" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.741536 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.745576 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.745617 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.746867 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.747051 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.774626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 
08:29:04.774798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.774896 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zzk\" (UniqueName: \"kubernetes.io/projected/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-kube-api-access-t7zzk\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.876913 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.877001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.877070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zzk\" (UniqueName: \"kubernetes.io/projected/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-kube-api-access-t7zzk\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: 
\"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.881218 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.881354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:04 crc kubenswrapper[4832]: I0125 08:29:04.894570 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zzk\" (UniqueName: \"kubernetes.io/projected/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-kube-api-access-t7zzk\") pod \"ssh-known-hosts-edpm-deployment-7xcl5\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:05 crc kubenswrapper[4832]: I0125 08:29:05.060994 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:05 crc kubenswrapper[4832]: I0125 08:29:05.134558 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7xcl5"] Jan 25 08:29:05 crc kubenswrapper[4832]: I0125 08:29:05.598030 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7xcl5"] Jan 25 08:29:05 crc kubenswrapper[4832]: I0125 08:29:05.598810 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:29:05 crc kubenswrapper[4832]: I0125 08:29:05.656946 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" event={"ID":"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f","Type":"ContainerStarted","Data":"52a1150f7af84020550388c999265d8c4cb5df8cffd9bf7b50f240de17a01013"} Jan 25 08:29:06 crc kubenswrapper[4832]: I0125 08:29:06.666956 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" event={"ID":"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f","Type":"ContainerStarted","Data":"043aec0e9a6ea539d7f861224cf58e45fb4eb0c578438d13d2f42e09262bec4e"} Jan 25 08:29:07 crc kubenswrapper[4832]: I0125 08:29:07.039537 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" podStartSLOduration=2.575478463 podStartE2EDuration="3.039503529s" podCreationTimestamp="2026-01-25 08:29:04 +0000 UTC" firstStartedPulling="2026-01-25 08:29:05.598593889 +0000 UTC m=+1928.272417422" lastFinishedPulling="2026-01-25 08:29:06.062618955 +0000 UTC m=+1928.736442488" observedRunningTime="2026-01-25 08:29:06.688512803 +0000 UTC m=+1929.362336336" watchObservedRunningTime="2026-01-25 08:29:07.039503529 +0000 UTC m=+1929.713327062" Jan 25 08:29:07 crc kubenswrapper[4832]: I0125 08:29:07.044470 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-6jrsn"] Jan 25 08:29:07 crc kubenswrapper[4832]: I0125 08:29:07.052023 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6jrsn"] Jan 25 08:29:07 crc kubenswrapper[4832]: I0125 08:29:07.682421 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043a28cc-bd52-47d0-83cd-59e5b8b101b4" path="/var/lib/kubelet/pods/043a28cc-bd52-47d0-83cd-59e5b8b101b4/volumes" Jan 25 08:29:13 crc kubenswrapper[4832]: I0125 08:29:13.719956 4832 generic.go:334] "Generic (PLEG): container finished" podID="977dfa38-e1a5-4daf-b1b4-4be30da2ee0f" containerID="043aec0e9a6ea539d7f861224cf58e45fb4eb0c578438d13d2f42e09262bec4e" exitCode=0 Jan 25 08:29:13 crc kubenswrapper[4832]: I0125 08:29:13.720046 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" event={"ID":"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f","Type":"ContainerDied","Data":"043aec0e9a6ea539d7f861224cf58e45fb4eb0c578438d13d2f42e09262bec4e"} Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.119319 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.284679 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-ssh-key-openstack-edpm-ipam\") pod \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.284925 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-inventory-0\") pod \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.284959 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7zzk\" (UniqueName: \"kubernetes.io/projected/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-kube-api-access-t7zzk\") pod \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\" (UID: \"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f\") " Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.289915 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-kube-api-access-t7zzk" (OuterVolumeSpecName: "kube-api-access-t7zzk") pod "977dfa38-e1a5-4daf-b1b4-4be30da2ee0f" (UID: "977dfa38-e1a5-4daf-b1b4-4be30da2ee0f"). InnerVolumeSpecName "kube-api-access-t7zzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.321332 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "977dfa38-e1a5-4daf-b1b4-4be30da2ee0f" (UID: "977dfa38-e1a5-4daf-b1b4-4be30da2ee0f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.324743 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "977dfa38-e1a5-4daf-b1b4-4be30da2ee0f" (UID: "977dfa38-e1a5-4daf-b1b4-4be30da2ee0f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.386473 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.386512 4832 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.386527 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7zzk\" (UniqueName: \"kubernetes.io/projected/977dfa38-e1a5-4daf-b1b4-4be30da2ee0f-kube-api-access-t7zzk\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.738884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" 
event={"ID":"977dfa38-e1a5-4daf-b1b4-4be30da2ee0f","Type":"ContainerDied","Data":"52a1150f7af84020550388c999265d8c4cb5df8cffd9bf7b50f240de17a01013"} Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.738938 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a1150f7af84020550388c999265d8c4cb5df8cffd9bf7b50f240de17a01013" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.738945 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7xcl5" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.811699 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2"] Jan 25 08:29:15 crc kubenswrapper[4832]: E0125 08:29:15.812219 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977dfa38-e1a5-4daf-b1b4-4be30da2ee0f" containerName="ssh-known-hosts-edpm-deployment" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.812239 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="977dfa38-e1a5-4daf-b1b4-4be30da2ee0f" containerName="ssh-known-hosts-edpm-deployment" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.812477 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="977dfa38-e1a5-4daf-b1b4-4be30da2ee0f" containerName="ssh-known-hosts-edpm-deployment" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.813275 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.816243 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.816243 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.816698 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.817232 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.830009 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2"] Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.895220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.895551 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psqll\" (UniqueName: \"kubernetes.io/projected/acaaf210-0845-4432-b149-30c8c038bfcb-kube-api-access-psqll\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.895610 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.997814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.997978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psqll\" (UniqueName: \"kubernetes.io/projected/acaaf210-0845-4432-b149-30c8c038bfcb-kube-api-access-psqll\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:15 crc kubenswrapper[4832]: I0125 08:29:15.998009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:16 crc kubenswrapper[4832]: I0125 08:29:16.003304 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:16 crc kubenswrapper[4832]: I0125 08:29:16.011599 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:16 crc kubenswrapper[4832]: I0125 08:29:16.014104 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psqll\" (UniqueName: \"kubernetes.io/projected/acaaf210-0845-4432-b149-30c8c038bfcb-kube-api-access-psqll\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qvjw2\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:16 crc kubenswrapper[4832]: I0125 08:29:16.132221 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:16 crc kubenswrapper[4832]: I0125 08:29:16.675185 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2"] Jan 25 08:29:16 crc kubenswrapper[4832]: I0125 08:29:16.751955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" event={"ID":"acaaf210-0845-4432-b149-30c8c038bfcb","Type":"ContainerStarted","Data":"4b5848795b3ef4f811ee632abca49409b253d5b0aea0ce8f52b0cd36136fba32"} Jan 25 08:29:17 crc kubenswrapper[4832]: I0125 08:29:17.761050 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" event={"ID":"acaaf210-0845-4432-b149-30c8c038bfcb","Type":"ContainerStarted","Data":"675b283fcaa948ab5afd26fb4c5484b464962f7ef763d5d03cf438cc004a3d92"} Jan 25 08:29:17 crc kubenswrapper[4832]: I0125 08:29:17.785910 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" podStartSLOduration=2.134637381 podStartE2EDuration="2.785889453s" podCreationTimestamp="2026-01-25 08:29:15 +0000 UTC" firstStartedPulling="2026-01-25 08:29:16.686116799 +0000 UTC m=+1939.359940332" lastFinishedPulling="2026-01-25 08:29:17.337368871 +0000 UTC m=+1940.011192404" observedRunningTime="2026-01-25 08:29:17.773487395 +0000 UTC m=+1940.447310928" watchObservedRunningTime="2026-01-25 08:29:17.785889453 +0000 UTC m=+1940.459712986" Jan 25 08:29:22 crc kubenswrapper[4832]: I0125 08:29:22.150033 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:29:22 crc kubenswrapper[4832]: I0125 
08:29:22.150532 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:29:26 crc kubenswrapper[4832]: I0125 08:29:26.834449 4832 generic.go:334] "Generic (PLEG): container finished" podID="acaaf210-0845-4432-b149-30c8c038bfcb" containerID="675b283fcaa948ab5afd26fb4c5484b464962f7ef763d5d03cf438cc004a3d92" exitCode=0 Jan 25 08:29:26 crc kubenswrapper[4832]: I0125 08:29:26.834543 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" event={"ID":"acaaf210-0845-4432-b149-30c8c038bfcb","Type":"ContainerDied","Data":"675b283fcaa948ab5afd26fb4c5484b464962f7ef763d5d03cf438cc004a3d92"} Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.360002 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.547021 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-ssh-key-openstack-edpm-ipam\") pod \"acaaf210-0845-4432-b149-30c8c038bfcb\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.547122 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psqll\" (UniqueName: \"kubernetes.io/projected/acaaf210-0845-4432-b149-30c8c038bfcb-kube-api-access-psqll\") pod \"acaaf210-0845-4432-b149-30c8c038bfcb\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.547367 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-inventory\") pod \"acaaf210-0845-4432-b149-30c8c038bfcb\" (UID: \"acaaf210-0845-4432-b149-30c8c038bfcb\") " Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.554209 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acaaf210-0845-4432-b149-30c8c038bfcb-kube-api-access-psqll" (OuterVolumeSpecName: "kube-api-access-psqll") pod "acaaf210-0845-4432-b149-30c8c038bfcb" (UID: "acaaf210-0845-4432-b149-30c8c038bfcb"). InnerVolumeSpecName "kube-api-access-psqll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.573836 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-inventory" (OuterVolumeSpecName: "inventory") pod "acaaf210-0845-4432-b149-30c8c038bfcb" (UID: "acaaf210-0845-4432-b149-30c8c038bfcb"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.579333 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "acaaf210-0845-4432-b149-30c8c038bfcb" (UID: "acaaf210-0845-4432-b149-30c8c038bfcb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.650141 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.650198 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/acaaf210-0845-4432-b149-30c8c038bfcb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.650211 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psqll\" (UniqueName: \"kubernetes.io/projected/acaaf210-0845-4432-b149-30c8c038bfcb-kube-api-access-psqll\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.851759 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" event={"ID":"acaaf210-0845-4432-b149-30c8c038bfcb","Type":"ContainerDied","Data":"4b5848795b3ef4f811ee632abca49409b253d5b0aea0ce8f52b0cd36136fba32"} Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.852037 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b5848795b3ef4f811ee632abca49409b253d5b0aea0ce8f52b0cd36136fba32" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 
08:29:28.852098 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qvjw2" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.918079 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s"] Jan 25 08:29:28 crc kubenswrapper[4832]: E0125 08:29:28.918539 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acaaf210-0845-4432-b149-30c8c038bfcb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.918563 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="acaaf210-0845-4432-b149-30c8c038bfcb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.918772 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="acaaf210-0845-4432-b149-30c8c038bfcb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.919511 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.922639 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.922921 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.923039 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.927861 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.935916 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s"] Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.957809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 08:29:28.958248 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlsk7\" (UniqueName: \"kubernetes.io/projected/63023ae6-5cfd-4940-8160-7547220bbb5b-kube-api-access-jlsk7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:28 crc kubenswrapper[4832]: I0125 
08:29:28.958410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.060506 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.060683 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlsk7\" (UniqueName: \"kubernetes.io/projected/63023ae6-5cfd-4940-8160-7547220bbb5b-kube-api-access-jlsk7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.060781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.065537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.073127 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.080603 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlsk7\" (UniqueName: \"kubernetes.io/projected/63023ae6-5cfd-4940-8160-7547220bbb5b-kube-api-access-jlsk7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x685s\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.239907 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.734254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s"] Jan 25 08:29:29 crc kubenswrapper[4832]: I0125 08:29:29.864354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" event={"ID":"63023ae6-5cfd-4940-8160-7547220bbb5b","Type":"ContainerStarted","Data":"7b2c13ffeccdb6717905d6f00670f95dbc5f01bc757857282180ae35d49ef6a1"} Jan 25 08:29:30 crc kubenswrapper[4832]: I0125 08:29:30.876128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" event={"ID":"63023ae6-5cfd-4940-8160-7547220bbb5b","Type":"ContainerStarted","Data":"9e881c6a47d489879909365e2a925827cde6fdaaf44840f51eda64a5ca0f5ccc"} Jan 25 08:29:30 crc kubenswrapper[4832]: I0125 08:29:30.897827 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" podStartSLOduration=2.396000504 podStartE2EDuration="2.897807755s" podCreationTimestamp="2026-01-25 08:29:28 +0000 UTC" firstStartedPulling="2026-01-25 08:29:29.739865648 +0000 UTC m=+1952.413689181" lastFinishedPulling="2026-01-25 08:29:30.241672899 +0000 UTC m=+1952.915496432" observedRunningTime="2026-01-25 08:29:30.892807778 +0000 UTC m=+1953.566631311" watchObservedRunningTime="2026-01-25 08:29:30.897807755 +0000 UTC m=+1953.571631288" Jan 25 08:29:40 crc kubenswrapper[4832]: I0125 08:29:40.972753 4832 generic.go:334] "Generic (PLEG): container finished" podID="63023ae6-5cfd-4940-8160-7547220bbb5b" containerID="9e881c6a47d489879909365e2a925827cde6fdaaf44840f51eda64a5ca0f5ccc" exitCode=0 Jan 25 08:29:40 crc kubenswrapper[4832]: I0125 08:29:40.972963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" event={"ID":"63023ae6-5cfd-4940-8160-7547220bbb5b","Type":"ContainerDied","Data":"9e881c6a47d489879909365e2a925827cde6fdaaf44840f51eda64a5ca0f5ccc"} Jan 25 08:29:41 crc kubenswrapper[4832]: I0125 08:29:41.749823 4832 scope.go:117] "RemoveContainer" containerID="19bfe1ab953cc86ae66dd70baae770eb99576c2e1d66361d4363058af63653f2" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.383848 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.525822 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-ssh-key-openstack-edpm-ipam\") pod \"63023ae6-5cfd-4940-8160-7547220bbb5b\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.526016 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-inventory\") pod \"63023ae6-5cfd-4940-8160-7547220bbb5b\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.526148 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlsk7\" (UniqueName: \"kubernetes.io/projected/63023ae6-5cfd-4940-8160-7547220bbb5b-kube-api-access-jlsk7\") pod \"63023ae6-5cfd-4940-8160-7547220bbb5b\" (UID: \"63023ae6-5cfd-4940-8160-7547220bbb5b\") " Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.533800 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63023ae6-5cfd-4940-8160-7547220bbb5b-kube-api-access-jlsk7" (OuterVolumeSpecName: "kube-api-access-jlsk7") pod 
"63023ae6-5cfd-4940-8160-7547220bbb5b" (UID: "63023ae6-5cfd-4940-8160-7547220bbb5b"). InnerVolumeSpecName "kube-api-access-jlsk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.556404 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "63023ae6-5cfd-4940-8160-7547220bbb5b" (UID: "63023ae6-5cfd-4940-8160-7547220bbb5b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.558990 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-inventory" (OuterVolumeSpecName: "inventory") pod "63023ae6-5cfd-4940-8160-7547220bbb5b" (UID: "63023ae6-5cfd-4940-8160-7547220bbb5b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.628718 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlsk7\" (UniqueName: \"kubernetes.io/projected/63023ae6-5cfd-4940-8160-7547220bbb5b-kube-api-access-jlsk7\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.628781 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.628802 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63023ae6-5cfd-4940-8160-7547220bbb5b-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.992733 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" event={"ID":"63023ae6-5cfd-4940-8160-7547220bbb5b","Type":"ContainerDied","Data":"7b2c13ffeccdb6717905d6f00670f95dbc5f01bc757857282180ae35d49ef6a1"} Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.992786 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b2c13ffeccdb6717905d6f00670f95dbc5f01bc757857282180ae35d49ef6a1" Jan 25 08:29:42 crc kubenswrapper[4832]: I0125 08:29:42.992800 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x685s" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.084778 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj"] Jan 25 08:29:43 crc kubenswrapper[4832]: E0125 08:29:43.085188 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63023ae6-5cfd-4940-8160-7547220bbb5b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.085210 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="63023ae6-5cfd-4940-8160-7547220bbb5b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.085464 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="63023ae6-5cfd-4940-8160-7547220bbb5b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.086183 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.088602 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.089109 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.089114 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.089282 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.089790 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.090846 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.091135 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.091325 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.121061 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj"] Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.136634 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.136718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.136744 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.136948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plgqg\" (UniqueName: 
\"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-kube-api-access-plgqg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137314 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137495 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137584 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137785 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137880 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.137983 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240206 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240266 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240340 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plgqg\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-kube-api-access-plgqg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240407 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240437 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240489 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240511 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: 
\"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.240636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.245659 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.246474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.246768 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.246996 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.246999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.247457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.248582 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.248849 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.249425 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.249559 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.254435 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.256161 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.256915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plgqg\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-kube-api-access-plgqg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.259638 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.460312 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:29:43 crc kubenswrapper[4832]: I0125 08:29:43.786129 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj"] Jan 25 08:29:44 crc kubenswrapper[4832]: I0125 08:29:44.002284 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" event={"ID":"ca88c519-c20b-4e26-86c2-5b62b163af37","Type":"ContainerStarted","Data":"be5ed8710d00abd326c2d29ce7b816218c14f702f6e8e5d8c8b7a64b257775d0"} Jan 25 08:29:45 crc kubenswrapper[4832]: I0125 08:29:45.013365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" event={"ID":"ca88c519-c20b-4e26-86c2-5b62b163af37","Type":"ContainerStarted","Data":"2aedaf7f30621d2d3ce16a679c6ff8ec95e58f024de5af8743eea8ebe1d59198"} Jan 25 08:29:45 crc kubenswrapper[4832]: I0125 08:29:45.048631 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" podStartSLOduration=1.604508219 podStartE2EDuration="2.048595272s" podCreationTimestamp="2026-01-25 08:29:43 +0000 UTC" firstStartedPulling="2026-01-25 08:29:43.790543979 +0000 UTC m=+1966.464367512" lastFinishedPulling="2026-01-25 08:29:44.234631022 +0000 UTC m=+1966.908454565" observedRunningTime="2026-01-25 08:29:45.037789753 +0000 UTC m=+1967.711613306" watchObservedRunningTime="2026-01-25 08:29:45.048595272 +0000 UTC 
m=+1967.722418815" Jan 25 08:29:52 crc kubenswrapper[4832]: I0125 08:29:52.149522 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:29:52 crc kubenswrapper[4832]: I0125 08:29:52.150069 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.160100 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2"] Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.162076 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.164732 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.167598 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.168864 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2"] Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.292216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a25d2383-1995-4dda-ab68-ab5872da5a5e-secret-volume\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.292457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb9n\" (UniqueName: \"kubernetes.io/projected/a25d2383-1995-4dda-ab68-ab5872da5a5e-kube-api-access-9hb9n\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.292494 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a25d2383-1995-4dda-ab68-ab5872da5a5e-config-volume\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.394166 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb9n\" (UniqueName: \"kubernetes.io/projected/a25d2383-1995-4dda-ab68-ab5872da5a5e-kube-api-access-9hb9n\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.394577 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a25d2383-1995-4dda-ab68-ab5872da5a5e-config-volume\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.394728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a25d2383-1995-4dda-ab68-ab5872da5a5e-secret-volume\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.395608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a25d2383-1995-4dda-ab68-ab5872da5a5e-config-volume\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.401825 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a25d2383-1995-4dda-ab68-ab5872da5a5e-secret-volume\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.411606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb9n\" (UniqueName: \"kubernetes.io/projected/a25d2383-1995-4dda-ab68-ab5872da5a5e-kube-api-access-9hb9n\") pod \"collect-profiles-29488830-4gsj2\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.480536 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:00 crc kubenswrapper[4832]: I0125 08:30:00.938600 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2"] Jan 25 08:30:01 crc kubenswrapper[4832]: I0125 08:30:01.158503 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" event={"ID":"a25d2383-1995-4dda-ab68-ab5872da5a5e","Type":"ContainerStarted","Data":"7392628c8e486aede12430904d3c21b5e7eb127d58ea052fcea26bcfa12b89ed"} Jan 25 08:30:01 crc kubenswrapper[4832]: I0125 08:30:01.158791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" event={"ID":"a25d2383-1995-4dda-ab68-ab5872da5a5e","Type":"ContainerStarted","Data":"669d3e0c79ad4c8c683d9ef81e5df109e76150570f00b304e0de91ac89939ace"} Jan 25 08:30:01 crc kubenswrapper[4832]: I0125 08:30:01.181660 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" 
podStartSLOduration=1.181634687 podStartE2EDuration="1.181634687s" podCreationTimestamp="2026-01-25 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 08:30:01.173798942 +0000 UTC m=+1983.847622495" watchObservedRunningTime="2026-01-25 08:30:01.181634687 +0000 UTC m=+1983.855458230" Jan 25 08:30:02 crc kubenswrapper[4832]: I0125 08:30:02.169078 4832 generic.go:334] "Generic (PLEG): container finished" podID="a25d2383-1995-4dda-ab68-ab5872da5a5e" containerID="7392628c8e486aede12430904d3c21b5e7eb127d58ea052fcea26bcfa12b89ed" exitCode=0 Jan 25 08:30:02 crc kubenswrapper[4832]: I0125 08:30:02.169182 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" event={"ID":"a25d2383-1995-4dda-ab68-ab5872da5a5e","Type":"ContainerDied","Data":"7392628c8e486aede12430904d3c21b5e7eb127d58ea052fcea26bcfa12b89ed"} Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.679528 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.761765 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hb9n\" (UniqueName: \"kubernetes.io/projected/a25d2383-1995-4dda-ab68-ab5872da5a5e-kube-api-access-9hb9n\") pod \"a25d2383-1995-4dda-ab68-ab5872da5a5e\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.762086 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a25d2383-1995-4dda-ab68-ab5872da5a5e-secret-volume\") pod \"a25d2383-1995-4dda-ab68-ab5872da5a5e\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.762227 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a25d2383-1995-4dda-ab68-ab5872da5a5e-config-volume\") pod \"a25d2383-1995-4dda-ab68-ab5872da5a5e\" (UID: \"a25d2383-1995-4dda-ab68-ab5872da5a5e\") " Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.763050 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25d2383-1995-4dda-ab68-ab5872da5a5e-config-volume" (OuterVolumeSpecName: "config-volume") pod "a25d2383-1995-4dda-ab68-ab5872da5a5e" (UID: "a25d2383-1995-4dda-ab68-ab5872da5a5e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.763892 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a25d2383-1995-4dda-ab68-ab5872da5a5e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.768036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d2383-1995-4dda-ab68-ab5872da5a5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a25d2383-1995-4dda-ab68-ab5872da5a5e" (UID: "a25d2383-1995-4dda-ab68-ab5872da5a5e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.768376 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25d2383-1995-4dda-ab68-ab5872da5a5e-kube-api-access-9hb9n" (OuterVolumeSpecName: "kube-api-access-9hb9n") pod "a25d2383-1995-4dda-ab68-ab5872da5a5e" (UID: "a25d2383-1995-4dda-ab68-ab5872da5a5e"). InnerVolumeSpecName "kube-api-access-9hb9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.865954 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hb9n\" (UniqueName: \"kubernetes.io/projected/a25d2383-1995-4dda-ab68-ab5872da5a5e-kube-api-access-9hb9n\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:03 crc kubenswrapper[4832]: I0125 08:30:03.865993 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a25d2383-1995-4dda-ab68-ab5872da5a5e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:04 crc kubenswrapper[4832]: I0125 08:30:04.185604 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" event={"ID":"a25d2383-1995-4dda-ab68-ab5872da5a5e","Type":"ContainerDied","Data":"669d3e0c79ad4c8c683d9ef81e5df109e76150570f00b304e0de91ac89939ace"} Jan 25 08:30:04 crc kubenswrapper[4832]: I0125 08:30:04.185648 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669d3e0c79ad4c8c683d9ef81e5df109e76150570f00b304e0de91ac89939ace" Jan 25 08:30:04 crc kubenswrapper[4832]: I0125 08:30:04.185654 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2" Jan 25 08:30:04 crc kubenswrapper[4832]: I0125 08:30:04.244724 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79"] Jan 25 08:30:04 crc kubenswrapper[4832]: I0125 08:30:04.252973 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488785-dcf79"] Jan 25 08:30:05 crc kubenswrapper[4832]: I0125 08:30:05.682235 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051ceaa0-fdb3-480a-9c5d-f56b1194ca81" path="/var/lib/kubelet/pods/051ceaa0-fdb3-480a-9c5d-f56b1194ca81/volumes" Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.149892 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.150460 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.150823 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.151610 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ee81b1e42e0e2f931beb9dc8d8ff5683471d0ba095236f471161e82f9c1c998"} 
pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.151690 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://5ee81b1e42e0e2f931beb9dc8d8ff5683471d0ba095236f471161e82f9c1c998" gracePeriod=600 Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.357560 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="5ee81b1e42e0e2f931beb9dc8d8ff5683471d0ba095236f471161e82f9c1c998" exitCode=0 Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.357602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"5ee81b1e42e0e2f931beb9dc8d8ff5683471d0ba095236f471161e82f9c1c998"} Jan 25 08:30:22 crc kubenswrapper[4832]: I0125 08:30:22.357639 4832 scope.go:117] "RemoveContainer" containerID="cac454964b3d1f20ac28961991abf402bf242194f2fbad579737da7d57d4a27f" Jan 25 08:30:23 crc kubenswrapper[4832]: I0125 08:30:23.367105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b"} Jan 25 08:30:28 crc kubenswrapper[4832]: I0125 08:30:28.411285 4832 generic.go:334] "Generic (PLEG): container finished" podID="ca88c519-c20b-4e26-86c2-5b62b163af37" containerID="2aedaf7f30621d2d3ce16a679c6ff8ec95e58f024de5af8743eea8ebe1d59198" exitCode=0 Jan 25 08:30:28 crc kubenswrapper[4832]: I0125 08:30:28.411369 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" event={"ID":"ca88c519-c20b-4e26-86c2-5b62b163af37","Type":"ContainerDied","Data":"2aedaf7f30621d2d3ce16a679c6ff8ec95e58f024de5af8743eea8ebe1d59198"} Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.817663 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-repo-setup-combined-ca-bundle\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870158 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-telemetry-combined-ca-bundle\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870230 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plgqg\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-kube-api-access-plgqg\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870339 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870447 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-inventory\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870481 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ovn-combined-ca-bundle\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: 
\"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-nova-combined-ca-bundle\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870572 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-bootstrap-combined-ca-bundle\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-libvirt-combined-ca-bundle\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-neutron-metadata-combined-ca-bundle\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.870690 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ssh-key-openstack-edpm-ipam\") pod \"ca88c519-c20b-4e26-86c2-5b62b163af37\" (UID: \"ca88c519-c20b-4e26-86c2-5b62b163af37\") " Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.877491 
4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.877558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.879236 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.879892 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-kube-api-access-plgqg" (OuterVolumeSpecName: "kube-api-access-plgqg") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "kube-api-access-plgqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.881808 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.883087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.885823 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.887761 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.900713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.900750 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.903656 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.918019 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.919297 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-inventory" (OuterVolumeSpecName: "inventory") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.922270 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca88c519-c20b-4e26-86c2-5b62b163af37" (UID: "ca88c519-c20b-4e26-86c2-5b62b163af37"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973440 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973473 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973487 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973498 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973508 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973518 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973527 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973535 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973546 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973555 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca88c519-c20b-4e26-86c2-5b62b163af37-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973566 4832 reconciler_common.go:293] "Volume detached 
for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973575 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973585 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plgqg\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-kube-api-access-plgqg\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:29 crc kubenswrapper[4832]: I0125 08:30:29.973593 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ca88c519-c20b-4e26-86c2-5b62b163af37-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.430940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" event={"ID":"ca88c519-c20b-4e26-86c2-5b62b163af37","Type":"ContainerDied","Data":"be5ed8710d00abd326c2d29ce7b816218c14f702f6e8e5d8c8b7a64b257775d0"} Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.431263 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be5ed8710d00abd326c2d29ce7b816218c14f702f6e8e5d8c8b7a64b257775d0" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.430992 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.618512 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f"] Jan 25 08:30:30 crc kubenswrapper[4832]: E0125 08:30:30.618954 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca88c519-c20b-4e26-86c2-5b62b163af37" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.618980 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca88c519-c20b-4e26-86c2-5b62b163af37" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 25 08:30:30 crc kubenswrapper[4832]: E0125 08:30:30.619008 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d2383-1995-4dda-ab68-ab5872da5a5e" containerName="collect-profiles" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.619015 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d2383-1995-4dda-ab68-ab5872da5a5e" containerName="collect-profiles" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.619188 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d2383-1995-4dda-ab68-ab5872da5a5e" containerName="collect-profiles" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.619202 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca88c519-c20b-4e26-86c2-5b62b163af37" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.620011 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.631060 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.631378 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.631647 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.632042 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f"] Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.632252 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.632961 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.685021 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.685136 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/23b2cd4e-4921-4082-8a44-50c065f88f52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.685320 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr25f\" (UniqueName: \"kubernetes.io/projected/23b2cd4e-4921-4082-8a44-50c065f88f52-kube-api-access-xr25f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.685345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.685449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.787447 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr25f\" (UniqueName: \"kubernetes.io/projected/23b2cd4e-4921-4082-8a44-50c065f88f52-kube-api-access-xr25f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.787506 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.787555 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.787588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.787629 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/23b2cd4e-4921-4082-8a44-50c065f88f52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.791914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/23b2cd4e-4921-4082-8a44-50c065f88f52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.793908 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.793991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.795189 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.805346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr25f\" (UniqueName: \"kubernetes.io/projected/23b2cd4e-4921-4082-8a44-50c065f88f52-kube-api-access-xr25f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bxs2f\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:30 crc kubenswrapper[4832]: I0125 08:30:30.939934 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:30:31 crc kubenswrapper[4832]: I0125 08:30:31.473047 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f"] Jan 25 08:30:32 crc kubenswrapper[4832]: I0125 08:30:32.450917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" event={"ID":"23b2cd4e-4921-4082-8a44-50c065f88f52","Type":"ContainerStarted","Data":"314c50c2fde4c491de8aa1b333ae1af2d1d7590e86ae35fd262f0d96055012c5"} Jan 25 08:30:32 crc kubenswrapper[4832]: I0125 08:30:32.451480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" event={"ID":"23b2cd4e-4921-4082-8a44-50c065f88f52","Type":"ContainerStarted","Data":"1325b8657510f93276cf6a1ce33d7187a450dcaed416a0a5d3efbcbae5228192"} Jan 25 08:30:32 crc kubenswrapper[4832]: I0125 08:30:32.475081 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" podStartSLOduration=1.985346925 podStartE2EDuration="2.475050612s" podCreationTimestamp="2026-01-25 08:30:30 +0000 UTC" firstStartedPulling="2026-01-25 08:30:31.483418533 +0000 UTC m=+2014.157242066" lastFinishedPulling="2026-01-25 08:30:31.97312222 +0000 UTC m=+2014.646945753" observedRunningTime="2026-01-25 08:30:32.471574676 +0000 UTC m=+2015.145398209" watchObservedRunningTime="2026-01-25 08:30:32.475050612 +0000 UTC m=+2015.148874145" Jan 25 08:30:41 crc kubenswrapper[4832]: I0125 08:30:41.827102 4832 scope.go:117] "RemoveContainer" containerID="6387974f472abd37b386de1337e463ca8517d1c91ef706a01e56a7509c79ae88" Jan 25 08:31:45 crc kubenswrapper[4832]: I0125 08:31:45.086662 4832 generic.go:334] "Generic (PLEG): container finished" podID="23b2cd4e-4921-4082-8a44-50c065f88f52" containerID="314c50c2fde4c491de8aa1b333ae1af2d1d7590e86ae35fd262f0d96055012c5" 
exitCode=0 Jan 25 08:31:45 crc kubenswrapper[4832]: I0125 08:31:45.086838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" event={"ID":"23b2cd4e-4921-4082-8a44-50c065f88f52","Type":"ContainerDied","Data":"314c50c2fde4c491de8aa1b333ae1af2d1d7590e86ae35fd262f0d96055012c5"} Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.505774 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.688341 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-inventory\") pod \"23b2cd4e-4921-4082-8a44-50c065f88f52\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.688586 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ssh-key-openstack-edpm-ipam\") pod \"23b2cd4e-4921-4082-8a44-50c065f88f52\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.688620 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ovn-combined-ca-bundle\") pod \"23b2cd4e-4921-4082-8a44-50c065f88f52\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.688729 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr25f\" (UniqueName: \"kubernetes.io/projected/23b2cd4e-4921-4082-8a44-50c065f88f52-kube-api-access-xr25f\") pod \"23b2cd4e-4921-4082-8a44-50c065f88f52\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " 
Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.688821 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/23b2cd4e-4921-4082-8a44-50c065f88f52-ovncontroller-config-0\") pod \"23b2cd4e-4921-4082-8a44-50c065f88f52\" (UID: \"23b2cd4e-4921-4082-8a44-50c065f88f52\") " Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.693914 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "23b2cd4e-4921-4082-8a44-50c065f88f52" (UID: "23b2cd4e-4921-4082-8a44-50c065f88f52"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.695257 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b2cd4e-4921-4082-8a44-50c065f88f52-kube-api-access-xr25f" (OuterVolumeSpecName: "kube-api-access-xr25f") pod "23b2cd4e-4921-4082-8a44-50c065f88f52" (UID: "23b2cd4e-4921-4082-8a44-50c065f88f52"). InnerVolumeSpecName "kube-api-access-xr25f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.718378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-inventory" (OuterVolumeSpecName: "inventory") pod "23b2cd4e-4921-4082-8a44-50c065f88f52" (UID: "23b2cd4e-4921-4082-8a44-50c065f88f52"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.718603 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2cd4e-4921-4082-8a44-50c065f88f52-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "23b2cd4e-4921-4082-8a44-50c065f88f52" (UID: "23b2cd4e-4921-4082-8a44-50c065f88f52"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.723675 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23b2cd4e-4921-4082-8a44-50c065f88f52" (UID: "23b2cd4e-4921-4082-8a44-50c065f88f52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.792906 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr25f\" (UniqueName: \"kubernetes.io/projected/23b2cd4e-4921-4082-8a44-50c065f88f52-kube-api-access-xr25f\") on node \"crc\" DevicePath \"\"" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.793110 4832 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/23b2cd4e-4921-4082-8a44-50c065f88f52-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.793978 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.794022 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:31:46 crc kubenswrapper[4832]: I0125 08:31:46.794039 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b2cd4e-4921-4082-8a44-50c065f88f52-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.107234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" event={"ID":"23b2cd4e-4921-4082-8a44-50c065f88f52","Type":"ContainerDied","Data":"1325b8657510f93276cf6a1ce33d7187a450dcaed416a0a5d3efbcbae5228192"} Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.107289 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1325b8657510f93276cf6a1ce33d7187a450dcaed416a0a5d3efbcbae5228192" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.107373 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bxs2f" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.211820 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj"] Jan 25 08:31:47 crc kubenswrapper[4832]: E0125 08:31:47.212409 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b2cd4e-4921-4082-8a44-50c065f88f52" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.212435 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b2cd4e-4921-4082-8a44-50c065f88f52" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.212658 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b2cd4e-4921-4082-8a44-50c065f88f52" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.213318 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.217232 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.217291 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.217734 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.218077 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.219035 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.219437 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.222930 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj"] Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.408076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.408142 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.408179 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.408350 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.408431 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.408599 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-shxdn\" (UniqueName: \"kubernetes.io/projected/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-kube-api-access-shxdn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.510720 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.510822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.510917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxdn\" (UniqueName: \"kubernetes.io/projected/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-kube-api-access-shxdn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.511091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.511209 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.511303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.516424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.516595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.516826 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.517754 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.518829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.535524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxdn\" (UniqueName: \"kubernetes.io/projected/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-kube-api-access-shxdn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:47 crc kubenswrapper[4832]: I0125 08:31:47.545282 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:31:48 crc kubenswrapper[4832]: W0125 08:31:48.106975 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0e39d1f_665b_486a_bc7c_d89d1e50fee9.slice/crio-f1ec42fcc30010cdd686f46b5ad5df7e180d7cb4037fb85b0ab786511e999c87 WatchSource:0}: Error finding container f1ec42fcc30010cdd686f46b5ad5df7e180d7cb4037fb85b0ab786511e999c87: Status 404 returned error can't find the container with id f1ec42fcc30010cdd686f46b5ad5df7e180d7cb4037fb85b0ab786511e999c87 Jan 25 08:31:48 crc kubenswrapper[4832]: I0125 08:31:48.107687 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj"] Jan 25 08:31:49 crc kubenswrapper[4832]: I0125 08:31:49.129323 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" event={"ID":"e0e39d1f-665b-486a-bc7c-d89d1e50fee9","Type":"ContainerStarted","Data":"1560931efcf661acb13a807e63ab4cbdcf066b8c3c15e0254e6a3a9a7ee161d6"} Jan 25 08:31:49 crc kubenswrapper[4832]: I0125 08:31:49.130685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" event={"ID":"e0e39d1f-665b-486a-bc7c-d89d1e50fee9","Type":"ContainerStarted","Data":"f1ec42fcc30010cdd686f46b5ad5df7e180d7cb4037fb85b0ab786511e999c87"} Jan 25 08:31:49 crc kubenswrapper[4832]: I0125 08:31:49.158885 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" podStartSLOduration=1.746388682 podStartE2EDuration="2.158860806s" 
podCreationTimestamp="2026-01-25 08:31:47 +0000 UTC" firstStartedPulling="2026-01-25 08:31:48.112353771 +0000 UTC m=+2090.786177304" lastFinishedPulling="2026-01-25 08:31:48.524825885 +0000 UTC m=+2091.198649428" observedRunningTime="2026-01-25 08:31:49.145528427 +0000 UTC m=+2091.819351950" watchObservedRunningTime="2026-01-25 08:31:49.158860806 +0000 UTC m=+2091.832684339" Jan 25 08:32:22 crc kubenswrapper[4832]: I0125 08:32:22.149556 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:32:22 crc kubenswrapper[4832]: I0125 08:32:22.150109 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:32:44 crc kubenswrapper[4832]: I0125 08:32:44.616736 4832 generic.go:334] "Generic (PLEG): container finished" podID="e0e39d1f-665b-486a-bc7c-d89d1e50fee9" containerID="1560931efcf661acb13a807e63ab4cbdcf066b8c3c15e0254e6a3a9a7ee161d6" exitCode=0 Jan 25 08:32:44 crc kubenswrapper[4832]: I0125 08:32:44.616847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" event={"ID":"e0e39d1f-665b-486a-bc7c-d89d1e50fee9","Type":"ContainerDied","Data":"1560931efcf661acb13a807e63ab4cbdcf066b8c3c15e0254e6a3a9a7ee161d6"} Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.028506 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.060050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.060114 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-inventory\") pod \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.060138 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-nova-metadata-neutron-config-0\") pod \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.060161 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shxdn\" (UniqueName: \"kubernetes.io/projected/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-kube-api-access-shxdn\") pod \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.061062 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-ssh-key-openstack-edpm-ipam\") pod \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " Jan 25 08:32:46 crc 
kubenswrapper[4832]: I0125 08:32:46.061100 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-metadata-combined-ca-bundle\") pod \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\" (UID: \"e0e39d1f-665b-486a-bc7c-d89d1e50fee9\") " Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.066443 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e0e39d1f-665b-486a-bc7c-d89d1e50fee9" (UID: "e0e39d1f-665b-486a-bc7c-d89d1e50fee9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.072706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-kube-api-access-shxdn" (OuterVolumeSpecName: "kube-api-access-shxdn") pod "e0e39d1f-665b-486a-bc7c-d89d1e50fee9" (UID: "e0e39d1f-665b-486a-bc7c-d89d1e50fee9"). InnerVolumeSpecName "kube-api-access-shxdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.090905 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e0e39d1f-665b-486a-bc7c-d89d1e50fee9" (UID: "e0e39d1f-665b-486a-bc7c-d89d1e50fee9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.091604 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-inventory" (OuterVolumeSpecName: "inventory") pod "e0e39d1f-665b-486a-bc7c-d89d1e50fee9" (UID: "e0e39d1f-665b-486a-bc7c-d89d1e50fee9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.094149 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e0e39d1f-665b-486a-bc7c-d89d1e50fee9" (UID: "e0e39d1f-665b-486a-bc7c-d89d1e50fee9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.106420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e0e39d1f-665b-486a-bc7c-d89d1e50fee9" (UID: "e0e39d1f-665b-486a-bc7c-d89d1e50fee9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.167057 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.167102 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.167117 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.167131 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shxdn\" (UniqueName: \"kubernetes.io/projected/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-kube-api-access-shxdn\") on node \"crc\" DevicePath \"\"" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.167144 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.167156 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e39d1f-665b-486a-bc7c-d89d1e50fee9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.637132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" event={"ID":"e0e39d1f-665b-486a-bc7c-d89d1e50fee9","Type":"ContainerDied","Data":"f1ec42fcc30010cdd686f46b5ad5df7e180d7cb4037fb85b0ab786511e999c87"} Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.637488 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ec42fcc30010cdd686f46b5ad5df7e180d7cb4037fb85b0ab786511e999c87" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.637188 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.725146 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7"] Jan 25 08:32:46 crc kubenswrapper[4832]: E0125 08:32:46.725814 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e39d1f-665b-486a-bc7c-d89d1e50fee9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.725907 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e39d1f-665b-486a-bc7c-d89d1e50fee9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.726139 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e39d1f-665b-486a-bc7c-d89d1e50fee9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.726829 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.730147 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.730427 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.730456 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.730275 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.730649 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.737717 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7"] Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.778618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.778701 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.778734 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6qb9\" (UniqueName: \"kubernetes.io/projected/d6839ea5-4201-48d8-b390-16fac4368cb9-kube-api-access-m6qb9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.779547 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.780380 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.884211 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.884290 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.884328 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6qb9\" (UniqueName: \"kubernetes.io/projected/d6839ea5-4201-48d8-b390-16fac4368cb9-kube-api-access-m6qb9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.884373 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.884434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.888530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: 
\"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.888820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.889041 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.891961 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:46 crc kubenswrapper[4832]: I0125 08:32:46.908746 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6qb9\" (UniqueName: \"kubernetes.io/projected/d6839ea5-4201-48d8-b390-16fac4368cb9-kube-api-access-m6qb9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sllb7\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:47 crc kubenswrapper[4832]: I0125 08:32:47.049133 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:32:47 crc kubenswrapper[4832]: I0125 08:32:47.601342 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7"] Jan 25 08:32:47 crc kubenswrapper[4832]: I0125 08:32:47.647843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" event={"ID":"d6839ea5-4201-48d8-b390-16fac4368cb9","Type":"ContainerStarted","Data":"7f310be98d0f0d50c116d33f13d0254d6d360805f716cc8fbef26e449792c2b0"} Jan 25 08:32:48 crc kubenswrapper[4832]: I0125 08:32:48.662922 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" event={"ID":"d6839ea5-4201-48d8-b390-16fac4368cb9","Type":"ContainerStarted","Data":"92b10b66042845f1cfbdcdbd59d719238872accdac35aa7bc5f64f1cf9f0c4e3"} Jan 25 08:32:48 crc kubenswrapper[4832]: I0125 08:32:48.683154 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" podStartSLOduration=2.209424969 podStartE2EDuration="2.683133675s" podCreationTimestamp="2026-01-25 08:32:46 +0000 UTC" firstStartedPulling="2026-01-25 08:32:47.597328442 +0000 UTC m=+2150.271151975" lastFinishedPulling="2026-01-25 08:32:48.071037148 +0000 UTC m=+2150.744860681" observedRunningTime="2026-01-25 08:32:48.682542787 +0000 UTC m=+2151.356366320" watchObservedRunningTime="2026-01-25 08:32:48.683133675 +0000 UTC m=+2151.356957208" Jan 25 08:32:52 crc kubenswrapper[4832]: I0125 08:32:52.150022 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:32:52 crc kubenswrapper[4832]: I0125 
08:32:52.150788 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:33:22 crc kubenswrapper[4832]: I0125 08:33:22.150320 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:33:22 crc kubenswrapper[4832]: I0125 08:33:22.151053 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:33:22 crc kubenswrapper[4832]: I0125 08:33:22.151100 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:33:22 crc kubenswrapper[4832]: I0125 08:33:22.151882 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:33:22 crc kubenswrapper[4832]: I0125 08:33:22.151935 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" 
containerName="machine-config-daemon" containerID="cri-o://9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" gracePeriod=600 Jan 25 08:33:22 crc kubenswrapper[4832]: E0125 08:33:22.540051 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:33:23 crc kubenswrapper[4832]: I0125 08:33:23.007975 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" exitCode=0 Jan 25 08:33:23 crc kubenswrapper[4832]: I0125 08:33:23.008028 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b"} Jan 25 08:33:23 crc kubenswrapper[4832]: I0125 08:33:23.008074 4832 scope.go:117] "RemoveContainer" containerID="5ee81b1e42e0e2f931beb9dc8d8ff5683471d0ba095236f471161e82f9c1c998" Jan 25 08:33:23 crc kubenswrapper[4832]: I0125 08:33:23.008839 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:33:23 crc kubenswrapper[4832]: E0125 08:33:23.009161 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:33:38 crc kubenswrapper[4832]: I0125 08:33:38.670428 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:33:38 crc kubenswrapper[4832]: E0125 08:33:38.671336 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.049033 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xwnb"] Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.052104 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.067979 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xwnb"] Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.201216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkrb\" (UniqueName: \"kubernetes.io/projected/4acd6361-1940-4a66-ba22-608952fad89a-kube-api-access-4wkrb\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.201560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-catalog-content\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.201592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-utilities\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.303812 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkrb\" (UniqueName: \"kubernetes.io/projected/4acd6361-1940-4a66-ba22-608952fad89a-kube-api-access-4wkrb\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.303882 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-catalog-content\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.303906 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-utilities\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.304537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-utilities\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.305235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-catalog-content\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.324889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkrb\" (UniqueName: \"kubernetes.io/projected/4acd6361-1940-4a66-ba22-608952fad89a-kube-api-access-4wkrb\") pod \"community-operators-8xwnb\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.382053 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:42 crc kubenswrapper[4832]: I0125 08:33:42.913044 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xwnb"] Jan 25 08:33:43 crc kubenswrapper[4832]: I0125 08:33:43.190423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerStarted","Data":"4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59"} Jan 25 08:33:43 crc kubenswrapper[4832]: I0125 08:33:43.191639 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerStarted","Data":"cbe2884901df011846372c008cc87bdfae20d33762c4b0590846c340615d2534"} Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.199360 4832 generic.go:334] "Generic (PLEG): container finished" podID="4acd6361-1940-4a66-ba22-608952fad89a" containerID="4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59" exitCode=0 Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.199413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerDied","Data":"4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59"} Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.448376 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm2z"] Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.450561 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.481223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm2z"] Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.558516 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kql22\" (UniqueName: \"kubernetes.io/projected/0d683fb6-7c94-459b-ba58-99c5e67526d2-kube-api-access-kql22\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.558608 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-utilities\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.558661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-catalog-content\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.661473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-catalog-content\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.661732 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kql22\" (UniqueName: \"kubernetes.io/projected/0d683fb6-7c94-459b-ba58-99c5e67526d2-kube-api-access-kql22\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.661777 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-utilities\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.662210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-catalog-content\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.663341 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-utilities\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.689586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kql22\" (UniqueName: \"kubernetes.io/projected/0d683fb6-7c94-459b-ba58-99c5e67526d2-kube-api-access-kql22\") pod \"redhat-marketplace-jzm2z\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:44 crc kubenswrapper[4832]: I0125 08:33:44.772869 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:45 crc kubenswrapper[4832]: I0125 08:33:45.209030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerStarted","Data":"b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5"} Jan 25 08:33:45 crc kubenswrapper[4832]: I0125 08:33:45.302461 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm2z"] Jan 25 08:33:45 crc kubenswrapper[4832]: W0125 08:33:45.306218 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d683fb6_7c94_459b_ba58_99c5e67526d2.slice/crio-27f6068caa86e781f9530ea2de95263b1c7447354e5e228f3c29ff47681f5f9b WatchSource:0}: Error finding container 27f6068caa86e781f9530ea2de95263b1c7447354e5e228f3c29ff47681f5f9b: Status 404 returned error can't find the container with id 27f6068caa86e781f9530ea2de95263b1c7447354e5e228f3c29ff47681f5f9b Jan 25 08:33:46 crc kubenswrapper[4832]: I0125 08:33:46.218295 4832 generic.go:334] "Generic (PLEG): container finished" podID="4acd6361-1940-4a66-ba22-608952fad89a" containerID="b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5" exitCode=0 Jan 25 08:33:46 crc kubenswrapper[4832]: I0125 08:33:46.218408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerDied","Data":"b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5"} Jan 25 08:33:46 crc kubenswrapper[4832]: I0125 08:33:46.220258 4832 generic.go:334] "Generic (PLEG): container finished" podID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerID="79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa" exitCode=0 Jan 25 08:33:46 crc kubenswrapper[4832]: I0125 
08:33:46.220295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm2z" event={"ID":"0d683fb6-7c94-459b-ba58-99c5e67526d2","Type":"ContainerDied","Data":"79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa"} Jan 25 08:33:46 crc kubenswrapper[4832]: I0125 08:33:46.220321 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm2z" event={"ID":"0d683fb6-7c94-459b-ba58-99c5e67526d2","Type":"ContainerStarted","Data":"27f6068caa86e781f9530ea2de95263b1c7447354e5e228f3c29ff47681f5f9b"} Jan 25 08:33:48 crc kubenswrapper[4832]: I0125 08:33:48.241181 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerStarted","Data":"9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72"} Jan 25 08:33:48 crc kubenswrapper[4832]: I0125 08:33:48.265305 4832 generic.go:334] "Generic (PLEG): container finished" podID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerID="5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f" exitCode=0 Jan 25 08:33:48 crc kubenswrapper[4832]: I0125 08:33:48.265716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm2z" event={"ID":"0d683fb6-7c94-459b-ba58-99c5e67526d2","Type":"ContainerDied","Data":"5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f"} Jan 25 08:33:48 crc kubenswrapper[4832]: I0125 08:33:48.279579 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xwnb" podStartSLOduration=3.045889786 podStartE2EDuration="6.279548095s" podCreationTimestamp="2026-01-25 08:33:42 +0000 UTC" firstStartedPulling="2026-01-25 08:33:44.201352805 +0000 UTC m=+2206.875176338" lastFinishedPulling="2026-01-25 08:33:47.435011114 +0000 UTC m=+2210.108834647" observedRunningTime="2026-01-25 
08:33:48.273816689 +0000 UTC m=+2210.947640222" watchObservedRunningTime="2026-01-25 08:33:48.279548095 +0000 UTC m=+2210.953371628" Jan 25 08:33:49 crc kubenswrapper[4832]: I0125 08:33:49.275253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm2z" event={"ID":"0d683fb6-7c94-459b-ba58-99c5e67526d2","Type":"ContainerStarted","Data":"6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8"} Jan 25 08:33:49 crc kubenswrapper[4832]: I0125 08:33:49.291961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jzm2z" podStartSLOduration=2.56294948 podStartE2EDuration="5.291943422s" podCreationTimestamp="2026-01-25 08:33:44 +0000 UTC" firstStartedPulling="2026-01-25 08:33:46.221701154 +0000 UTC m=+2208.895524707" lastFinishedPulling="2026-01-25 08:33:48.950695106 +0000 UTC m=+2211.624518649" observedRunningTime="2026-01-25 08:33:49.288902578 +0000 UTC m=+2211.962726111" watchObservedRunningTime="2026-01-25 08:33:49.291943422 +0000 UTC m=+2211.965766955" Jan 25 08:33:50 crc kubenswrapper[4832]: I0125 08:33:50.669716 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:33:50 crc kubenswrapper[4832]: E0125 08:33:50.670245 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:33:52 crc kubenswrapper[4832]: I0125 08:33:52.382371 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:52 crc kubenswrapper[4832]: I0125 
08:33:52.382801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:52 crc kubenswrapper[4832]: I0125 08:33:52.455281 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:53 crc kubenswrapper[4832]: I0125 08:33:53.364013 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:53 crc kubenswrapper[4832]: I0125 08:33:53.836687 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xwnb"] Jan 25 08:33:54 crc kubenswrapper[4832]: I0125 08:33:54.773218 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:54 crc kubenswrapper[4832]: I0125 08:33:54.773380 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:54 crc kubenswrapper[4832]: I0125 08:33:54.817253 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.330555 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8xwnb" podUID="4acd6361-1940-4a66-ba22-608952fad89a" containerName="registry-server" containerID="cri-o://9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72" gracePeriod=2 Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.392514 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.767428 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.896213 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkrb\" (UniqueName: \"kubernetes.io/projected/4acd6361-1940-4a66-ba22-608952fad89a-kube-api-access-4wkrb\") pod \"4acd6361-1940-4a66-ba22-608952fad89a\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.896309 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-catalog-content\") pod \"4acd6361-1940-4a66-ba22-608952fad89a\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.896489 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-utilities\") pod \"4acd6361-1940-4a66-ba22-608952fad89a\" (UID: \"4acd6361-1940-4a66-ba22-608952fad89a\") " Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.898587 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-utilities" (OuterVolumeSpecName: "utilities") pod "4acd6361-1940-4a66-ba22-608952fad89a" (UID: "4acd6361-1940-4a66-ba22-608952fad89a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.901878 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4acd6361-1940-4a66-ba22-608952fad89a-kube-api-access-4wkrb" (OuterVolumeSpecName: "kube-api-access-4wkrb") pod "4acd6361-1940-4a66-ba22-608952fad89a" (UID: "4acd6361-1940-4a66-ba22-608952fad89a"). InnerVolumeSpecName "kube-api-access-4wkrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.952714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4acd6361-1940-4a66-ba22-608952fad89a" (UID: "4acd6361-1940-4a66-ba22-608952fad89a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.999510 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkrb\" (UniqueName: \"kubernetes.io/projected/4acd6361-1940-4a66-ba22-608952fad89a-kube-api-access-4wkrb\") on node \"crc\" DevicePath \"\"" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.999583 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:33:55 crc kubenswrapper[4832]: I0125 08:33:55.999605 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4acd6361-1940-4a66-ba22-608952fad89a-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.340823 4832 generic.go:334] "Generic (PLEG): container finished" podID="4acd6361-1940-4a66-ba22-608952fad89a" containerID="9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72" exitCode=0 Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.340925 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xwnb" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.340905 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerDied","Data":"9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72"} Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.341136 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xwnb" event={"ID":"4acd6361-1940-4a66-ba22-608952fad89a","Type":"ContainerDied","Data":"cbe2884901df011846372c008cc87bdfae20d33762c4b0590846c340615d2534"} Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.341168 4832 scope.go:117] "RemoveContainer" containerID="9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.373803 4832 scope.go:117] "RemoveContainer" containerID="b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.375566 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xwnb"] Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.382241 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8xwnb"] Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.399054 4832 scope.go:117] "RemoveContainer" containerID="4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.445609 4832 scope.go:117] "RemoveContainer" containerID="9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72" Jan 25 08:33:56 crc kubenswrapper[4832]: E0125 08:33:56.446240 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72\": container with ID starting with 9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72 not found: ID does not exist" containerID="9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.446278 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72"} err="failed to get container status \"9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72\": rpc error: code = NotFound desc = could not find container \"9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72\": container with ID starting with 9a522f18443bf1de7cae3dfd730d20fd651a2eaac6db768dc304ae2f76fc0a72 not found: ID does not exist" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.446344 4832 scope.go:117] "RemoveContainer" containerID="b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5" Jan 25 08:33:56 crc kubenswrapper[4832]: E0125 08:33:56.446751 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5\": container with ID starting with b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5 not found: ID does not exist" containerID="b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.446804 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5"} err="failed to get container status \"b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5\": rpc error: code = NotFound desc = could not find container \"b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5\": container with ID 
starting with b8e80b3a520b499ae7e00a1cfd9e2c624ec4979e84e22da675621c3d454cc1e5 not found: ID does not exist" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.446837 4832 scope.go:117] "RemoveContainer" containerID="4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59" Jan 25 08:33:56 crc kubenswrapper[4832]: E0125 08:33:56.447658 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59\": container with ID starting with 4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59 not found: ID does not exist" containerID="4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.447829 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59"} err="failed to get container status \"4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59\": rpc error: code = NotFound desc = could not find container \"4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59\": container with ID starting with 4eb22f42b3a82cdfb6d5977e9891c76cf93237497adf45a0d7fde2036da0be59 not found: ID does not exist" Jan 25 08:33:56 crc kubenswrapper[4832]: I0125 08:33:56.636730 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm2z"] Jan 25 08:33:57 crc kubenswrapper[4832]: I0125 08:33:57.682824 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4acd6361-1940-4a66-ba22-608952fad89a" path="/var/lib/kubelet/pods/4acd6361-1940-4a66-ba22-608952fad89a/volumes" Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.364427 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jzm2z" 
podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerName="registry-server" containerID="cri-o://6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8" gracePeriod=2 Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.806344 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.958243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-utilities\") pod \"0d683fb6-7c94-459b-ba58-99c5e67526d2\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.958408 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kql22\" (UniqueName: \"kubernetes.io/projected/0d683fb6-7c94-459b-ba58-99c5e67526d2-kube-api-access-kql22\") pod \"0d683fb6-7c94-459b-ba58-99c5e67526d2\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.958445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-catalog-content\") pod \"0d683fb6-7c94-459b-ba58-99c5e67526d2\" (UID: \"0d683fb6-7c94-459b-ba58-99c5e67526d2\") " Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.959418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-utilities" (OuterVolumeSpecName: "utilities") pod "0d683fb6-7c94-459b-ba58-99c5e67526d2" (UID: "0d683fb6-7c94-459b-ba58-99c5e67526d2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.969377 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d683fb6-7c94-459b-ba58-99c5e67526d2-kube-api-access-kql22" (OuterVolumeSpecName: "kube-api-access-kql22") pod "0d683fb6-7c94-459b-ba58-99c5e67526d2" (UID: "0d683fb6-7c94-459b-ba58-99c5e67526d2"). InnerVolumeSpecName "kube-api-access-kql22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:33:58 crc kubenswrapper[4832]: I0125 08:33:58.980981 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d683fb6-7c94-459b-ba58-99c5e67526d2" (UID: "0d683fb6-7c94-459b-ba58-99c5e67526d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.060939 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.060973 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kql22\" (UniqueName: \"kubernetes.io/projected/0d683fb6-7c94-459b-ba58-99c5e67526d2-kube-api-access-kql22\") on node \"crc\" DevicePath \"\"" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.060984 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d683fb6-7c94-459b-ba58-99c5e67526d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.376434 4832 generic.go:334] "Generic (PLEG): container finished" podID="0d683fb6-7c94-459b-ba58-99c5e67526d2" 
containerID="6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8" exitCode=0 Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.376494 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm2z" event={"ID":"0d683fb6-7c94-459b-ba58-99c5e67526d2","Type":"ContainerDied","Data":"6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8"} Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.376779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzm2z" event={"ID":"0d683fb6-7c94-459b-ba58-99c5e67526d2","Type":"ContainerDied","Data":"27f6068caa86e781f9530ea2de95263b1c7447354e5e228f3c29ff47681f5f9b"} Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.376800 4832 scope.go:117] "RemoveContainer" containerID="6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.376502 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzm2z" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.400999 4832 scope.go:117] "RemoveContainer" containerID="5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.415326 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm2z"] Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.426348 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzm2z"] Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.448166 4832 scope.go:117] "RemoveContainer" containerID="79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.501844 4832 scope.go:117] "RemoveContainer" containerID="6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8" Jan 25 08:33:59 crc kubenswrapper[4832]: E0125 08:33:59.502568 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8\": container with ID starting with 6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8 not found: ID does not exist" containerID="6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.502619 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8"} err="failed to get container status \"6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8\": rpc error: code = NotFound desc = could not find container \"6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8\": container with ID starting with 6056c03d4fa96edde4d2dd65713f3f1a1e80857884a0f5dcfe94d57ffd1ab8b8 not found: 
ID does not exist" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.502650 4832 scope.go:117] "RemoveContainer" containerID="5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f" Jan 25 08:33:59 crc kubenswrapper[4832]: E0125 08:33:59.503116 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f\": container with ID starting with 5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f not found: ID does not exist" containerID="5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.503197 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f"} err="failed to get container status \"5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f\": rpc error: code = NotFound desc = could not find container \"5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f\": container with ID starting with 5bc89e5e1557ed4d2ba2a7e8efc3dccba41a49dab1be67e5786c58da90ee651f not found: ID does not exist" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.503286 4832 scope.go:117] "RemoveContainer" containerID="79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa" Jan 25 08:33:59 crc kubenswrapper[4832]: E0125 08:33:59.503658 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa\": container with ID starting with 79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa not found: ID does not exist" containerID="79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.503713 4832 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa"} err="failed to get container status \"79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa\": rpc error: code = NotFound desc = could not find container \"79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa\": container with ID starting with 79cfa9396a3d51076d12fe0c93dffda2aa586ba0a8b975dd5eb9c51087fed8fa not found: ID does not exist" Jan 25 08:33:59 crc kubenswrapper[4832]: I0125 08:33:59.681016 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" path="/var/lib/kubelet/pods/0d683fb6-7c94-459b-ba58-99c5e67526d2/volumes" Jan 25 08:34:04 crc kubenswrapper[4832]: I0125 08:34:04.670708 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:34:04 crc kubenswrapper[4832]: E0125 08:34:04.676582 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:34:19 crc kubenswrapper[4832]: I0125 08:34:19.669622 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:34:19 crc kubenswrapper[4832]: E0125 08:34:19.670501 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:34:31 crc kubenswrapper[4832]: I0125 08:34:31.670134 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:34:31 crc kubenswrapper[4832]: E0125 08:34:31.670790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:34:43 crc kubenswrapper[4832]: I0125 08:34:43.669834 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:34:43 crc kubenswrapper[4832]: E0125 08:34:43.670584 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.724007 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cmrl6"] Jan 25 08:34:45 crc kubenswrapper[4832]: E0125 08:34:45.724781 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4acd6361-1940-4a66-ba22-608952fad89a" containerName="extract-content" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.724796 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4acd6361-1940-4a66-ba22-608952fad89a" 
containerName="extract-content" Jan 25 08:34:45 crc kubenswrapper[4832]: E0125 08:34:45.724817 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4acd6361-1940-4a66-ba22-608952fad89a" containerName="extract-utilities" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.724823 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4acd6361-1940-4a66-ba22-608952fad89a" containerName="extract-utilities" Jan 25 08:34:45 crc kubenswrapper[4832]: E0125 08:34:45.724842 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4acd6361-1940-4a66-ba22-608952fad89a" containerName="registry-server" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.724848 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4acd6361-1940-4a66-ba22-608952fad89a" containerName="registry-server" Jan 25 08:34:45 crc kubenswrapper[4832]: E0125 08:34:45.724863 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerName="extract-utilities" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.724870 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerName="extract-utilities" Jan 25 08:34:45 crc kubenswrapper[4832]: E0125 08:34:45.724888 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerName="registry-server" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.724893 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerName="registry-server" Jan 25 08:34:45 crc kubenswrapper[4832]: E0125 08:34:45.724907 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerName="extract-content" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.724913 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" 
containerName="extract-content" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.725098 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d683fb6-7c94-459b-ba58-99c5e67526d2" containerName="registry-server" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.725116 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4acd6361-1940-4a66-ba22-608952fad89a" containerName="registry-server" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.726691 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.746837 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmrl6"] Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.887040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-catalog-content\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.887094 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-utilities\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.887132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdcc5\" (UniqueName: \"kubernetes.io/projected/d569348c-9170-4acb-9fcc-03e3e5ac4171-kube-api-access-jdcc5\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " 
pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.988805 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdcc5\" (UniqueName: \"kubernetes.io/projected/d569348c-9170-4acb-9fcc-03e3e5ac4171-kube-api-access-jdcc5\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.989004 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-catalog-content\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.989037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-utilities\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.989617 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-utilities\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:45 crc kubenswrapper[4832]: I0125 08:34:45.989715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-catalog-content\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " 
pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:46 crc kubenswrapper[4832]: I0125 08:34:46.015316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdcc5\" (UniqueName: \"kubernetes.io/projected/d569348c-9170-4acb-9fcc-03e3e5ac4171-kube-api-access-jdcc5\") pod \"certified-operators-cmrl6\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:46 crc kubenswrapper[4832]: I0125 08:34:46.058709 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:46 crc kubenswrapper[4832]: I0125 08:34:46.645253 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmrl6"] Jan 25 08:34:46 crc kubenswrapper[4832]: I0125 08:34:46.780886 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmrl6" event={"ID":"d569348c-9170-4acb-9fcc-03e3e5ac4171","Type":"ContainerStarted","Data":"af29a9b9675001fee70c155543fd09b17246d25a7862b7c8523f798e19f346e0"} Jan 25 08:34:47 crc kubenswrapper[4832]: I0125 08:34:47.792986 4832 generic.go:334] "Generic (PLEG): container finished" podID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerID="9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a" exitCode=0 Jan 25 08:34:47 crc kubenswrapper[4832]: I0125 08:34:47.793049 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmrl6" event={"ID":"d569348c-9170-4acb-9fcc-03e3e5ac4171","Type":"ContainerDied","Data":"9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a"} Jan 25 08:34:47 crc kubenswrapper[4832]: I0125 08:34:47.795694 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:34:48 crc kubenswrapper[4832]: I0125 08:34:48.804042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-cmrl6" event={"ID":"d569348c-9170-4acb-9fcc-03e3e5ac4171","Type":"ContainerStarted","Data":"f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a"} Jan 25 08:34:49 crc kubenswrapper[4832]: I0125 08:34:49.816576 4832 generic.go:334] "Generic (PLEG): container finished" podID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerID="f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a" exitCode=0 Jan 25 08:34:49 crc kubenswrapper[4832]: I0125 08:34:49.816626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmrl6" event={"ID":"d569348c-9170-4acb-9fcc-03e3e5ac4171","Type":"ContainerDied","Data":"f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a"} Jan 25 08:34:50 crc kubenswrapper[4832]: I0125 08:34:50.827694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmrl6" event={"ID":"d569348c-9170-4acb-9fcc-03e3e5ac4171","Type":"ContainerStarted","Data":"785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c"} Jan 25 08:34:50 crc kubenswrapper[4832]: I0125 08:34:50.845963 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmrl6" podStartSLOduration=3.372589084 podStartE2EDuration="5.845944927s" podCreationTimestamp="2026-01-25 08:34:45 +0000 UTC" firstStartedPulling="2026-01-25 08:34:47.795329869 +0000 UTC m=+2270.469153422" lastFinishedPulling="2026-01-25 08:34:50.268685732 +0000 UTC m=+2272.942509265" observedRunningTime="2026-01-25 08:34:50.843475421 +0000 UTC m=+2273.517298954" watchObservedRunningTime="2026-01-25 08:34:50.845944927 +0000 UTC m=+2273.519768460" Jan 25 08:34:56 crc kubenswrapper[4832]: I0125 08:34:56.059601 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:56 crc kubenswrapper[4832]: I0125 
08:34:56.060203 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:56 crc kubenswrapper[4832]: I0125 08:34:56.151155 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:56 crc kubenswrapper[4832]: I0125 08:34:56.921919 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:56 crc kubenswrapper[4832]: I0125 08:34:56.972764 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmrl6"] Jan 25 08:34:58 crc kubenswrapper[4832]: I0125 08:34:58.671063 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:34:58 crc kubenswrapper[4832]: E0125 08:34:58.672806 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:34:58 crc kubenswrapper[4832]: I0125 08:34:58.896479 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cmrl6" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="registry-server" containerID="cri-o://785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c" gracePeriod=2 Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.350494 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.452267 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdcc5\" (UniqueName: \"kubernetes.io/projected/d569348c-9170-4acb-9fcc-03e3e5ac4171-kube-api-access-jdcc5\") pod \"d569348c-9170-4acb-9fcc-03e3e5ac4171\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.452701 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-catalog-content\") pod \"d569348c-9170-4acb-9fcc-03e3e5ac4171\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.452733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-utilities\") pod \"d569348c-9170-4acb-9fcc-03e3e5ac4171\" (UID: \"d569348c-9170-4acb-9fcc-03e3e5ac4171\") " Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.453735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-utilities" (OuterVolumeSpecName: "utilities") pod "d569348c-9170-4acb-9fcc-03e3e5ac4171" (UID: "d569348c-9170-4acb-9fcc-03e3e5ac4171"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.458366 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d569348c-9170-4acb-9fcc-03e3e5ac4171-kube-api-access-jdcc5" (OuterVolumeSpecName: "kube-api-access-jdcc5") pod "d569348c-9170-4acb-9fcc-03e3e5ac4171" (UID: "d569348c-9170-4acb-9fcc-03e3e5ac4171"). InnerVolumeSpecName "kube-api-access-jdcc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.517792 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d569348c-9170-4acb-9fcc-03e3e5ac4171" (UID: "d569348c-9170-4acb-9fcc-03e3e5ac4171"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.554490 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.554534 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d569348c-9170-4acb-9fcc-03e3e5ac4171-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.554547 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdcc5\" (UniqueName: \"kubernetes.io/projected/d569348c-9170-4acb-9fcc-03e3e5ac4171-kube-api-access-jdcc5\") on node \"crc\" DevicePath \"\"" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.905990 4832 generic.go:334] "Generic (PLEG): container finished" podID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerID="785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c" exitCode=0 Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.906076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmrl6" event={"ID":"d569348c-9170-4acb-9fcc-03e3e5ac4171","Type":"ContainerDied","Data":"785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c"} Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.906154 4832 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmrl6" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.907355 4832 scope.go:117] "RemoveContainer" containerID="785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.907336 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmrl6" event={"ID":"d569348c-9170-4acb-9fcc-03e3e5ac4171","Type":"ContainerDied","Data":"af29a9b9675001fee70c155543fd09b17246d25a7862b7c8523f798e19f346e0"} Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.933886 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmrl6"] Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.940231 4832 scope.go:117] "RemoveContainer" containerID="f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a" Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.943000 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmrl6"] Jan 25 08:34:59 crc kubenswrapper[4832]: I0125 08:34:59.969869 4832 scope.go:117] "RemoveContainer" containerID="9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a" Jan 25 08:35:00 crc kubenswrapper[4832]: I0125 08:35:00.018192 4832 scope.go:117] "RemoveContainer" containerID="785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c" Jan 25 08:35:00 crc kubenswrapper[4832]: E0125 08:35:00.018640 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c\": container with ID starting with 785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c not found: ID does not exist" containerID="785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c" Jan 25 08:35:00 crc kubenswrapper[4832]: I0125 08:35:00.018674 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c"} err="failed to get container status \"785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c\": rpc error: code = NotFound desc = could not find container \"785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c\": container with ID starting with 785afc6275cbaafd1232d997b797388c79b3f60e72df00f286122e77b339ab6c not found: ID does not exist" Jan 25 08:35:00 crc kubenswrapper[4832]: I0125 08:35:00.018709 4832 scope.go:117] "RemoveContainer" containerID="f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a" Jan 25 08:35:00 crc kubenswrapper[4832]: E0125 08:35:00.019038 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a\": container with ID starting with f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a not found: ID does not exist" containerID="f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a" Jan 25 08:35:00 crc kubenswrapper[4832]: I0125 08:35:00.019135 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a"} err="failed to get container status \"f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a\": rpc error: code = NotFound desc = could not find container \"f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a\": container with ID starting with f20f1cad32539522b779210b5707e63dcf01f74fc935a3239b34729683ffe12a not found: ID does not exist" Jan 25 08:35:00 crc kubenswrapper[4832]: I0125 08:35:00.019175 4832 scope.go:117] "RemoveContainer" containerID="9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a" Jan 25 08:35:00 crc kubenswrapper[4832]: E0125 
08:35:00.019580 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a\": container with ID starting with 9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a not found: ID does not exist" containerID="9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a" Jan 25 08:35:00 crc kubenswrapper[4832]: I0125 08:35:00.019607 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a"} err="failed to get container status \"9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a\": rpc error: code = NotFound desc = could not find container \"9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a\": container with ID starting with 9cb5e37695ccffb312909025fb945160e87ee2d99285c6ca329aa3170ad0560a not found: ID does not exist" Jan 25 08:35:01 crc kubenswrapper[4832]: I0125 08:35:01.680099 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" path="/var/lib/kubelet/pods/d569348c-9170-4acb-9fcc-03e3e5ac4171/volumes" Jan 25 08:35:10 crc kubenswrapper[4832]: I0125 08:35:10.669911 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:35:10 crc kubenswrapper[4832]: E0125 08:35:10.670697 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:35:21 crc kubenswrapper[4832]: I0125 08:35:21.670179 
4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:35:21 crc kubenswrapper[4832]: E0125 08:35:21.671106 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:35:32 crc kubenswrapper[4832]: I0125 08:35:32.671137 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:35:32 crc kubenswrapper[4832]: E0125 08:35:32.672025 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:35:45 crc kubenswrapper[4832]: I0125 08:35:45.670145 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:35:45 crc kubenswrapper[4832]: E0125 08:35:45.671030 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:35:57 crc kubenswrapper[4832]: I0125 
08:35:57.675371 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:35:57 crc kubenswrapper[4832]: E0125 08:35:57.676159 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:36:11 crc kubenswrapper[4832]: I0125 08:36:11.669496 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:36:11 crc kubenswrapper[4832]: E0125 08:36:11.670161 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:36:22 crc kubenswrapper[4832]: I0125 08:36:22.670448 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:36:22 crc kubenswrapper[4832]: E0125 08:36:22.671415 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:36:37 crc 
kubenswrapper[4832]: I0125 08:36:37.677365 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:36:37 crc kubenswrapper[4832]: E0125 08:36:37.678347 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:36:52 crc kubenswrapper[4832]: I0125 08:36:52.671259 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:36:52 crc kubenswrapper[4832]: E0125 08:36:52.672103 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:37:07 crc kubenswrapper[4832]: I0125 08:37:07.678955 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:37:07 crc kubenswrapper[4832]: E0125 08:37:07.679887 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 
25 08:37:22 crc kubenswrapper[4832]: I0125 08:37:22.669413 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:37:22 crc kubenswrapper[4832]: E0125 08:37:22.670168 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:37:26 crc kubenswrapper[4832]: I0125 08:37:26.639663 4832 generic.go:334] "Generic (PLEG): container finished" podID="d6839ea5-4201-48d8-b390-16fac4368cb9" containerID="92b10b66042845f1cfbdcdbd59d719238872accdac35aa7bc5f64f1cf9f0c4e3" exitCode=0 Jan 25 08:37:26 crc kubenswrapper[4832]: I0125 08:37:26.639757 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" event={"ID":"d6839ea5-4201-48d8-b390-16fac4368cb9","Type":"ContainerDied","Data":"92b10b66042845f1cfbdcdbd59d719238872accdac35aa7bc5f64f1cf9f0c4e3"} Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.109783 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.208964 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-secret-0\") pod \"d6839ea5-4201-48d8-b390-16fac4368cb9\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.209030 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-ssh-key-openstack-edpm-ipam\") pod \"d6839ea5-4201-48d8-b390-16fac4368cb9\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.209072 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-combined-ca-bundle\") pod \"d6839ea5-4201-48d8-b390-16fac4368cb9\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.209112 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-inventory\") pod \"d6839ea5-4201-48d8-b390-16fac4368cb9\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.209263 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6qb9\" (UniqueName: \"kubernetes.io/projected/d6839ea5-4201-48d8-b390-16fac4368cb9-kube-api-access-m6qb9\") pod \"d6839ea5-4201-48d8-b390-16fac4368cb9\" (UID: \"d6839ea5-4201-48d8-b390-16fac4368cb9\") " Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.214323 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6839ea5-4201-48d8-b390-16fac4368cb9-kube-api-access-m6qb9" (OuterVolumeSpecName: "kube-api-access-m6qb9") pod "d6839ea5-4201-48d8-b390-16fac4368cb9" (UID: "d6839ea5-4201-48d8-b390-16fac4368cb9"). InnerVolumeSpecName "kube-api-access-m6qb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.216494 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d6839ea5-4201-48d8-b390-16fac4368cb9" (UID: "d6839ea5-4201-48d8-b390-16fac4368cb9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.238078 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-inventory" (OuterVolumeSpecName: "inventory") pod "d6839ea5-4201-48d8-b390-16fac4368cb9" (UID: "d6839ea5-4201-48d8-b390-16fac4368cb9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.240292 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d6839ea5-4201-48d8-b390-16fac4368cb9" (UID: "d6839ea5-4201-48d8-b390-16fac4368cb9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.240330 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d6839ea5-4201-48d8-b390-16fac4368cb9" (UID: "d6839ea5-4201-48d8-b390-16fac4368cb9"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.311024 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.311058 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.311074 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.311086 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6qb9\" (UniqueName: \"kubernetes.io/projected/d6839ea5-4201-48d8-b390-16fac4368cb9-kube-api-access-m6qb9\") on node \"crc\" DevicePath \"\"" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.311096 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d6839ea5-4201-48d8-b390-16fac4368cb9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.658860 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" event={"ID":"d6839ea5-4201-48d8-b390-16fac4368cb9","Type":"ContainerDied","Data":"7f310be98d0f0d50c116d33f13d0254d6d360805f716cc8fbef26e449792c2b0"} Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.658919 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f310be98d0f0d50c116d33f13d0254d6d360805f716cc8fbef26e449792c2b0" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.658915 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sllb7" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.754945 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk"] Jan 25 08:37:28 crc kubenswrapper[4832]: E0125 08:37:28.755584 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="extract-utilities" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.755603 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="extract-utilities" Jan 25 08:37:28 crc kubenswrapper[4832]: E0125 08:37:28.755623 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="extract-content" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.755630 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="extract-content" Jan 25 08:37:28 crc kubenswrapper[4832]: E0125 08:37:28.755643 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6839ea5-4201-48d8-b390-16fac4368cb9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.755651 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d6839ea5-4201-48d8-b390-16fac4368cb9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 25 08:37:28 crc kubenswrapper[4832]: E0125 08:37:28.755668 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="registry-server" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.755675 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="registry-server" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.755857 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d569348c-9170-4acb-9fcc-03e3e5ac4171" containerName="registry-server" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.755875 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6839ea5-4201-48d8-b390-16fac4368cb9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.756503 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.761286 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.761309 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.761317 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.761427 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.761434 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.761518 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.762650 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.773463 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk"] Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 
08:37:28.823150 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shmws\" (UniqueName: \"kubernetes.io/projected/2859d34c-ae01-4c03-a14a-5256e17130ed-kube-api-access-shmws\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.823600 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925233 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: 
\"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925418 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925523 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-shmws\" (UniqueName: \"kubernetes.io/projected/2859d34c-ae01-4c03-a14a-5256e17130ed-kube-api-access-shmws\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925685 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925720 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.925761 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.927129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.934170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.934235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.934272 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.934497 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.935522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.937246 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.942272 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:28 crc kubenswrapper[4832]: I0125 08:37:28.944140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shmws\" (UniqueName: \"kubernetes.io/projected/2859d34c-ae01-4c03-a14a-5256e17130ed-kube-api-access-shmws\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f8kjk\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:29 crc kubenswrapper[4832]: I0125 08:37:29.072882 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:37:29 crc kubenswrapper[4832]: I0125 08:37:29.572352 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk"] Jan 25 08:37:29 crc kubenswrapper[4832]: I0125 08:37:29.667977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" event={"ID":"2859d34c-ae01-4c03-a14a-5256e17130ed","Type":"ContainerStarted","Data":"43e05b91252177af056b80cb0f2ed6302887935cb865604226feb2748099f10c"} Jan 25 08:37:30 crc kubenswrapper[4832]: I0125 08:37:30.678641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" event={"ID":"2859d34c-ae01-4c03-a14a-5256e17130ed","Type":"ContainerStarted","Data":"3b974b0c24f288f79acd928146798d6268cc93619b12dc5309e7933703ab4ee2"} Jan 25 08:37:30 crc kubenswrapper[4832]: I0125 08:37:30.709814 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" podStartSLOduration=2.123768731 podStartE2EDuration="2.709791146s" podCreationTimestamp="2026-01-25 08:37:28 +0000 UTC" firstStartedPulling="2026-01-25 08:37:29.576506222 +0000 UTC m=+2432.250329755" lastFinishedPulling="2026-01-25 08:37:30.162528637 +0000 UTC m=+2432.836352170" observedRunningTime="2026-01-25 08:37:30.698310181 +0000 UTC m=+2433.372133724" watchObservedRunningTime="2026-01-25 08:37:30.709791146 +0000 UTC m=+2433.383614679" Jan 25 08:37:33 crc kubenswrapper[4832]: I0125 08:37:33.670112 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:37:33 crc kubenswrapper[4832]: E0125 08:37:33.670785 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:37:46 crc kubenswrapper[4832]: I0125 08:37:46.669333 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:37:46 crc kubenswrapper[4832]: E0125 08:37:46.670085 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:37:58 crc kubenswrapper[4832]: I0125 08:37:58.670073 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:37:58 crc kubenswrapper[4832]: E0125 08:37:58.671129 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:38:13 crc kubenswrapper[4832]: I0125 08:38:13.669838 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:38:13 crc kubenswrapper[4832]: E0125 08:38:13.670633 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:38:26 crc kubenswrapper[4832]: I0125 08:38:26.670547 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:38:27 crc kubenswrapper[4832]: I0125 08:38:27.205979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"01a3d6a79b771ae9ac2fb9588d7531ae3092546b29765dbea401f0026700a915"} Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.005315 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zfbkk"] Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.009345 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.018362 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfbkk"] Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.127952 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957mf\" (UniqueName: \"kubernetes.io/projected/29a283ec-e73f-44cd-abfb-af54e6428115-kube-api-access-957mf\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.128414 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-utilities\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.128484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-catalog-content\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.230833 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-957mf\" (UniqueName: \"kubernetes.io/projected/29a283ec-e73f-44cd-abfb-af54e6428115-kube-api-access-957mf\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.230918 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-utilities\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.230947 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-catalog-content\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.231673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-catalog-content\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.231741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-utilities\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.258351 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-957mf\" (UniqueName: \"kubernetes.io/projected/29a283ec-e73f-44cd-abfb-af54e6428115-kube-api-access-957mf\") pod \"redhat-operators-zfbkk\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.346608 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:05 crc kubenswrapper[4832]: I0125 08:39:05.884632 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfbkk"] Jan 25 08:39:06 crc kubenswrapper[4832]: I0125 08:39:06.583036 4832 generic.go:334] "Generic (PLEG): container finished" podID="29a283ec-e73f-44cd-abfb-af54e6428115" containerID="786faab45a2c484171edac7b8c00dfbfccb074022246073bfdc755bbc27b0609" exitCode=0 Jan 25 08:39:06 crc kubenswrapper[4832]: I0125 08:39:06.583116 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfbkk" event={"ID":"29a283ec-e73f-44cd-abfb-af54e6428115","Type":"ContainerDied","Data":"786faab45a2c484171edac7b8c00dfbfccb074022246073bfdc755bbc27b0609"} Jan 25 08:39:06 crc kubenswrapper[4832]: I0125 08:39:06.583362 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfbkk" event={"ID":"29a283ec-e73f-44cd-abfb-af54e6428115","Type":"ContainerStarted","Data":"202323fcdc093aec7860cfd66eb77876631799c17afe02d8d0e849c85263e23d"} Jan 25 08:39:07 crc kubenswrapper[4832]: I0125 08:39:07.593689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfbkk" event={"ID":"29a283ec-e73f-44cd-abfb-af54e6428115","Type":"ContainerStarted","Data":"b4ed91cabf80b7a58162df4d2ab70b1f23bf9ef781dcd3135f9f3051ec4effa6"} Jan 25 08:39:08 crc kubenswrapper[4832]: I0125 08:39:08.603956 4832 generic.go:334] "Generic (PLEG): container finished" podID="29a283ec-e73f-44cd-abfb-af54e6428115" containerID="b4ed91cabf80b7a58162df4d2ab70b1f23bf9ef781dcd3135f9f3051ec4effa6" exitCode=0 Jan 25 08:39:08 crc kubenswrapper[4832]: I0125 08:39:08.604006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfbkk" 
event={"ID":"29a283ec-e73f-44cd-abfb-af54e6428115","Type":"ContainerDied","Data":"b4ed91cabf80b7a58162df4d2ab70b1f23bf9ef781dcd3135f9f3051ec4effa6"} Jan 25 08:39:09 crc kubenswrapper[4832]: I0125 08:39:09.615462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfbkk" event={"ID":"29a283ec-e73f-44cd-abfb-af54e6428115","Type":"ContainerStarted","Data":"d2e50dde23067db2a054c714a0bbe7c9e10d067a2d924bcaf24cdb1230a8e3ae"} Jan 25 08:39:09 crc kubenswrapper[4832]: I0125 08:39:09.636944 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zfbkk" podStartSLOduration=3.116630296 podStartE2EDuration="5.636920851s" podCreationTimestamp="2026-01-25 08:39:04 +0000 UTC" firstStartedPulling="2026-01-25 08:39:06.584766081 +0000 UTC m=+2529.258589614" lastFinishedPulling="2026-01-25 08:39:09.105056636 +0000 UTC m=+2531.778880169" observedRunningTime="2026-01-25 08:39:09.631378229 +0000 UTC m=+2532.305201762" watchObservedRunningTime="2026-01-25 08:39:09.636920851 +0000 UTC m=+2532.310744384" Jan 25 08:39:15 crc kubenswrapper[4832]: I0125 08:39:15.346994 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:15 crc kubenswrapper[4832]: I0125 08:39:15.347559 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:15 crc kubenswrapper[4832]: I0125 08:39:15.396950 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:15 crc kubenswrapper[4832]: I0125 08:39:15.709377 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:15 crc kubenswrapper[4832]: I0125 08:39:15.759949 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-zfbkk"] Jan 25 08:39:17 crc kubenswrapper[4832]: I0125 08:39:17.681415 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zfbkk" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="registry-server" containerID="cri-o://d2e50dde23067db2a054c714a0bbe7c9e10d067a2d924bcaf24cdb1230a8e3ae" gracePeriod=2 Jan 25 08:39:19 crc kubenswrapper[4832]: I0125 08:39:19.701844 4832 generic.go:334] "Generic (PLEG): container finished" podID="29a283ec-e73f-44cd-abfb-af54e6428115" containerID="d2e50dde23067db2a054c714a0bbe7c9e10d067a2d924bcaf24cdb1230a8e3ae" exitCode=0 Jan 25 08:39:19 crc kubenswrapper[4832]: I0125 08:39:19.701922 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfbkk" event={"ID":"29a283ec-e73f-44cd-abfb-af54e6428115","Type":"ContainerDied","Data":"d2e50dde23067db2a054c714a0bbe7c9e10d067a2d924bcaf24cdb1230a8e3ae"} Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.007223 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.170445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-utilities\") pod \"29a283ec-e73f-44cd-abfb-af54e6428115\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.170713 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-catalog-content\") pod \"29a283ec-e73f-44cd-abfb-af54e6428115\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.170768 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-957mf\" (UniqueName: \"kubernetes.io/projected/29a283ec-e73f-44cd-abfb-af54e6428115-kube-api-access-957mf\") pod \"29a283ec-e73f-44cd-abfb-af54e6428115\" (UID: \"29a283ec-e73f-44cd-abfb-af54e6428115\") " Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.171263 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-utilities" (OuterVolumeSpecName: "utilities") pod "29a283ec-e73f-44cd-abfb-af54e6428115" (UID: "29a283ec-e73f-44cd-abfb-af54e6428115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.176187 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a283ec-e73f-44cd-abfb-af54e6428115-kube-api-access-957mf" (OuterVolumeSpecName: "kube-api-access-957mf") pod "29a283ec-e73f-44cd-abfb-af54e6428115" (UID: "29a283ec-e73f-44cd-abfb-af54e6428115"). InnerVolumeSpecName "kube-api-access-957mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.272956 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.272988 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-957mf\" (UniqueName: \"kubernetes.io/projected/29a283ec-e73f-44cd-abfb-af54e6428115-kube-api-access-957mf\") on node \"crc\" DevicePath \"\"" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.284136 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a283ec-e73f-44cd-abfb-af54e6428115" (UID: "29a283ec-e73f-44cd-abfb-af54e6428115"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.374403 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a283ec-e73f-44cd-abfb-af54e6428115-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.713943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfbkk" event={"ID":"29a283ec-e73f-44cd-abfb-af54e6428115","Type":"ContainerDied","Data":"202323fcdc093aec7860cfd66eb77876631799c17afe02d8d0e849c85263e23d"} Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.714264 4832 scope.go:117] "RemoveContainer" containerID="d2e50dde23067db2a054c714a0bbe7c9e10d067a2d924bcaf24cdb1230a8e3ae" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.714006 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfbkk" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.739458 4832 scope.go:117] "RemoveContainer" containerID="b4ed91cabf80b7a58162df4d2ab70b1f23bf9ef781dcd3135f9f3051ec4effa6" Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.755580 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfbkk"] Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.763347 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zfbkk"] Jan 25 08:39:20 crc kubenswrapper[4832]: I0125 08:39:20.771975 4832 scope.go:117] "RemoveContainer" containerID="786faab45a2c484171edac7b8c00dfbfccb074022246073bfdc755bbc27b0609" Jan 25 08:39:21 crc kubenswrapper[4832]: I0125 08:39:21.681446 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" path="/var/lib/kubelet/pods/29a283ec-e73f-44cd-abfb-af54e6428115/volumes" Jan 25 08:40:11 crc kubenswrapper[4832]: I0125 08:40:11.161970 4832 generic.go:334] "Generic (PLEG): container finished" podID="2859d34c-ae01-4c03-a14a-5256e17130ed" containerID="3b974b0c24f288f79acd928146798d6268cc93619b12dc5309e7933703ab4ee2" exitCode=0 Jan 25 08:40:11 crc kubenswrapper[4832]: I0125 08:40:11.162051 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" event={"ID":"2859d34c-ae01-4c03-a14a-5256e17130ed","Type":"ContainerDied","Data":"3b974b0c24f288f79acd928146798d6268cc93619b12dc5309e7933703ab4ee2"} Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.532965 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-0\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-inventory\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717341 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-ssh-key-openstack-edpm-ipam\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-0\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717405 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shmws\" (UniqueName: \"kubernetes.io/projected/2859d34c-ae01-4c03-a14a-5256e17130ed-kube-api-access-shmws\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717451 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-1\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717475 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-combined-ca-bundle\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717584 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-1\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.717635 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-extra-config-0\") pod \"2859d34c-ae01-4c03-a14a-5256e17130ed\" (UID: \"2859d34c-ae01-4c03-a14a-5256e17130ed\") " Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.723605 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2859d34c-ae01-4c03-a14a-5256e17130ed-kube-api-access-shmws" (OuterVolumeSpecName: "kube-api-access-shmws") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "kube-api-access-shmws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.737493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.745131 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.745267 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.748404 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.748833 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.752001 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.753769 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.755550 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-inventory" (OuterVolumeSpecName: "inventory") pod "2859d34c-ae01-4c03-a14a-5256e17130ed" (UID: "2859d34c-ae01-4c03-a14a-5256e17130ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820440 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820487 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820505 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820518 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shmws\" (UniqueName: \"kubernetes.io/projected/2859d34c-ae01-4c03-a14a-5256e17130ed-kube-api-access-shmws\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820531 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820544 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820556 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-1\") 
on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820569 4832 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:12 crc kubenswrapper[4832]: I0125 08:40:12.820581 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2859d34c-ae01-4c03-a14a-5256e17130ed-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.180126 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" event={"ID":"2859d34c-ae01-4c03-a14a-5256e17130ed","Type":"ContainerDied","Data":"43e05b91252177af056b80cb0f2ed6302887935cb865604226feb2748099f10c"} Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.180171 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43e05b91252177af056b80cb0f2ed6302887935cb865604226feb2748099f10c" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.180228 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f8kjk" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.279352 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj"] Jan 25 08:40:13 crc kubenswrapper[4832]: E0125 08:40:13.279931 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="registry-server" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.279953 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="registry-server" Jan 25 08:40:13 crc kubenswrapper[4832]: E0125 08:40:13.279978 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="extract-content" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.279986 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="extract-content" Jan 25 08:40:13 crc kubenswrapper[4832]: E0125 08:40:13.280012 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2859d34c-ae01-4c03-a14a-5256e17130ed" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.280021 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2859d34c-ae01-4c03-a14a-5256e17130ed" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 25 08:40:13 crc kubenswrapper[4832]: E0125 08:40:13.280031 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="extract-utilities" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.280036 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="extract-utilities" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.280270 4832 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2859d34c-ae01-4c03-a14a-5256e17130ed" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.280293 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a283ec-e73f-44cd-abfb-af54e6428115" containerName="registry-server" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.281280 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.284123 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7jwxb" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.284592 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.284810 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.285818 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.288763 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.295702 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj"] Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.432807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.432857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.432883 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.432926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.433012 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.433049 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djb7v\" (UniqueName: \"kubernetes.io/projected/303826b3-afb9-4ce0-a967-9a30c910c85b-kube-api-access-djb7v\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.433068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.535116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djb7v\" (UniqueName: \"kubernetes.io/projected/303826b3-afb9-4ce0-a967-9a30c910c85b-kube-api-access-djb7v\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.535233 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.536288 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.536635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.536696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.536752 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.536909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.540070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.540932 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.541568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.543203 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: 
\"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.544060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.545100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.554212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djb7v\" (UniqueName: \"kubernetes.io/projected/303826b3-afb9-4ce0-a967-9a30c910c85b-kube-api-access-djb7v\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-548xj\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:13 crc kubenswrapper[4832]: I0125 08:40:13.651037 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:40:14 crc kubenswrapper[4832]: I0125 08:40:14.183105 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj"] Jan 25 08:40:14 crc kubenswrapper[4832]: I0125 08:40:14.189956 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:40:15 crc kubenswrapper[4832]: I0125 08:40:15.199686 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" event={"ID":"303826b3-afb9-4ce0-a967-9a30c910c85b","Type":"ContainerStarted","Data":"23c5168f42d820175d4b279f7613863929096d89a3697b09222d451cd73b4903"} Jan 25 08:40:15 crc kubenswrapper[4832]: I0125 08:40:15.199760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" event={"ID":"303826b3-afb9-4ce0-a967-9a30c910c85b","Type":"ContainerStarted","Data":"ef72ea7cc4fd91e0d102404b3dab7f6aae7d8654113ec6fea55819a7a05aa9ee"} Jan 25 08:40:15 crc kubenswrapper[4832]: I0125 08:40:15.225152 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" podStartSLOduration=1.7558239530000002 podStartE2EDuration="2.225125568s" podCreationTimestamp="2026-01-25 08:40:13 +0000 UTC" firstStartedPulling="2026-01-25 08:40:14.189762626 +0000 UTC m=+2596.863586159" lastFinishedPulling="2026-01-25 08:40:14.659064241 +0000 UTC m=+2597.332887774" observedRunningTime="2026-01-25 08:40:15.219170554 +0000 UTC m=+2597.892994087" watchObservedRunningTime="2026-01-25 08:40:15.225125568 +0000 UTC m=+2597.898949101" Jan 25 08:40:52 crc kubenswrapper[4832]: I0125 08:40:52.150117 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:40:52 crc kubenswrapper[4832]: I0125 08:40:52.150744 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:41:22 crc kubenswrapper[4832]: I0125 08:41:22.150482 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:41:22 crc kubenswrapper[4832]: I0125 08:41:22.151245 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:41:52 crc kubenswrapper[4832]: I0125 08:41:52.150168 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:41:52 crc kubenswrapper[4832]: I0125 08:41:52.150922 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 25 08:41:52 crc kubenswrapper[4832]: I0125 08:41:52.150981 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:41:52 crc kubenswrapper[4832]: I0125 08:41:52.151847 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01a3d6a79b771ae9ac2fb9588d7531ae3092546b29765dbea401f0026700a915"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:41:52 crc kubenswrapper[4832]: I0125 08:41:52.151927 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://01a3d6a79b771ae9ac2fb9588d7531ae3092546b29765dbea401f0026700a915" gracePeriod=600 Jan 25 08:41:53 crc kubenswrapper[4832]: I0125 08:41:53.095196 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="01a3d6a79b771ae9ac2fb9588d7531ae3092546b29765dbea401f0026700a915" exitCode=0 Jan 25 08:41:53 crc kubenswrapper[4832]: I0125 08:41:53.095270 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"01a3d6a79b771ae9ac2fb9588d7531ae3092546b29765dbea401f0026700a915"} Jan 25 08:41:53 crc kubenswrapper[4832]: I0125 08:41:53.095862 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246"} Jan 25 08:41:53 
crc kubenswrapper[4832]: I0125 08:41:53.095885 4832 scope.go:117] "RemoveContainer" containerID="9f2eeb7f40f324f08ff39981fc95d743c2fa5a392afa220896be4c22d983c99b" Jan 25 08:42:59 crc kubenswrapper[4832]: I0125 08:42:59.668538 4832 generic.go:334] "Generic (PLEG): container finished" podID="303826b3-afb9-4ce0-a967-9a30c910c85b" containerID="23c5168f42d820175d4b279f7613863929096d89a3697b09222d451cd73b4903" exitCode=0 Jan 25 08:42:59 crc kubenswrapper[4832]: I0125 08:42:59.681360 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" event={"ID":"303826b3-afb9-4ce0-a967-9a30c910c85b","Type":"ContainerDied","Data":"23c5168f42d820175d4b279f7613863929096d89a3697b09222d451cd73b4903"} Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.755636 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.836223 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djb7v\" (UniqueName: \"kubernetes.io/projected/303826b3-afb9-4ce0-a967-9a30c910c85b-kube-api-access-djb7v\") pod \"303826b3-afb9-4ce0-a967-9a30c910c85b\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.836293 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-2\") pod \"303826b3-afb9-4ce0-a967-9a30c910c85b\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.836378 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ssh-key-openstack-edpm-ipam\") pod \"303826b3-afb9-4ce0-a967-9a30c910c85b\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.836442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-inventory\") pod \"303826b3-afb9-4ce0-a967-9a30c910c85b\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.836517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-0\") pod \"303826b3-afb9-4ce0-a967-9a30c910c85b\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.836618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-telemetry-combined-ca-bundle\") pod \"303826b3-afb9-4ce0-a967-9a30c910c85b\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.836670 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-1\") pod \"303826b3-afb9-4ce0-a967-9a30c910c85b\" (UID: \"303826b3-afb9-4ce0-a967-9a30c910c85b\") " Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.842781 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303826b3-afb9-4ce0-a967-9a30c910c85b-kube-api-access-djb7v" (OuterVolumeSpecName: "kube-api-access-djb7v") pod "303826b3-afb9-4ce0-a967-9a30c910c85b" 
(UID: "303826b3-afb9-4ce0-a967-9a30c910c85b"). InnerVolumeSpecName "kube-api-access-djb7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.850453 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "303826b3-afb9-4ce0-a967-9a30c910c85b" (UID: "303826b3-afb9-4ce0-a967-9a30c910c85b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.865934 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "303826b3-afb9-4ce0-a967-9a30c910c85b" (UID: "303826b3-afb9-4ce0-a967-9a30c910c85b"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.867167 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "303826b3-afb9-4ce0-a967-9a30c910c85b" (UID: "303826b3-afb9-4ce0-a967-9a30c910c85b"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.868460 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-inventory" (OuterVolumeSpecName: "inventory") pod "303826b3-afb9-4ce0-a967-9a30c910c85b" (UID: "303826b3-afb9-4ce0-a967-9a30c910c85b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.873583 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "303826b3-afb9-4ce0-a967-9a30c910c85b" (UID: "303826b3-afb9-4ce0-a967-9a30c910c85b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.874982 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "303826b3-afb9-4ce0-a967-9a30c910c85b" (UID: "303826b3-afb9-4ce0-a967-9a30c910c85b"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.938910 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.939215 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.939292 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.939377 4832 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djb7v\" (UniqueName: \"kubernetes.io/projected/303826b3-afb9-4ce0-a967-9a30c910c85b-kube-api-access-djb7v\") on node \"crc\" DevicePath \"\"" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.939700 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.939776 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 25 08:43:01 crc kubenswrapper[4832]: I0125 08:43:01.939850 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303826b3-afb9-4ce0-a967-9a30c910c85b-inventory\") on node \"crc\" DevicePath \"\"" Jan 25 08:43:02 crc kubenswrapper[4832]: I0125 08:43:02.174441 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" event={"ID":"303826b3-afb9-4ce0-a967-9a30c910c85b","Type":"ContainerDied","Data":"ef72ea7cc4fd91e0d102404b3dab7f6aae7d8654113ec6fea55819a7a05aa9ee"} Jan 25 08:43:02 crc kubenswrapper[4832]: I0125 08:43:02.174498 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef72ea7cc4fd91e0d102404b3dab7f6aae7d8654113ec6fea55819a7a05aa9ee" Jan 25 08:43:02 crc kubenswrapper[4832]: I0125 08:43:02.174564 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-548xj" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.789247 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 25 08:43:43 crc kubenswrapper[4832]: E0125 08:43:43.790128 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303826b3-afb9-4ce0-a967-9a30c910c85b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.790150 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="303826b3-afb9-4ce0-a967-9a30c910c85b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.790522 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="303826b3-afb9-4ce0-a967-9a30c910c85b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.791319 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.794032 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.795793 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.795964 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.795973 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wnc6t" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.811907 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.889470 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890182 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rft5k\" (UniqueName: \"kubernetes.io/projected/f075c376-fe6e-44de-bb3d-113de5b9fb3f-kube-api-access-rft5k\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890365 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890553 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-config-data\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.890676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992555 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992643 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992677 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rft5k\" (UniqueName: \"kubernetes.io/projected/f075c376-fe6e-44de-bb3d-113de5b9fb3f-kube-api-access-rft5k\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-config-data\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.992858 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: 
\"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.993816 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.994024 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.994345 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.994612 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:43 crc kubenswrapper[4832]: I0125 08:43:43.995320 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-config-data\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " 
pod="openstack/tempest-tests-tempest" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.021683 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.021789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.022131 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.025337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rft5k\" (UniqueName: \"kubernetes.io/projected/f075c376-fe6e-44de-bb3d-113de5b9fb3f-kube-api-access-rft5k\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.038809 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") " pod="openstack/tempest-tests-tempest" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.127176 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.605266 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.660481 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dnh4"] Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.662754 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.678745 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dnh4"] Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.812988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbk5\" (UniqueName: \"kubernetes.io/projected/bf137d78-d7aa-4571-a336-130ab2a9bf77-kube-api-access-bpbk5\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.813624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-utilities\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.813817 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-catalog-content\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " 
pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.915686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-catalog-content\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.915892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpbk5\" (UniqueName: \"kubernetes.io/projected/bf137d78-d7aa-4571-a336-130ab2a9bf77-kube-api-access-bpbk5\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.915925 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-utilities\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.916456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-catalog-content\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.916579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-utilities\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " 
pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:44 crc kubenswrapper[4832]: I0125 08:43:44.957301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpbk5\" (UniqueName: \"kubernetes.io/projected/bf137d78-d7aa-4571-a336-130ab2a9bf77-kube-api-access-bpbk5\") pod \"community-operators-7dnh4\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:45 crc kubenswrapper[4832]: I0125 08:43:45.007666 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:43:45 crc kubenswrapper[4832]: I0125 08:43:45.504465 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dnh4"] Jan 25 08:43:45 crc kubenswrapper[4832]: W0125 08:43:45.525646 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf137d78_d7aa_4571_a336_130ab2a9bf77.slice/crio-9094cbe69d4d93164fe0071d541adb363f4a3aae4c9edeb893474c573c1f1481 WatchSource:0}: Error finding container 9094cbe69d4d93164fe0071d541adb363f4a3aae4c9edeb893474c573c1f1481: Status 404 returned error can't find the container with id 9094cbe69d4d93164fe0071d541adb363f4a3aae4c9edeb893474c573c1f1481 Jan 25 08:43:45 crc kubenswrapper[4832]: I0125 08:43:45.556220 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f075c376-fe6e-44de-bb3d-113de5b9fb3f","Type":"ContainerStarted","Data":"a079734bbb82710295e961674635d06d5d22609699f27b92b5e630c25b526814"} Jan 25 08:43:45 crc kubenswrapper[4832]: I0125 08:43:45.559651 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dnh4" event={"ID":"bf137d78-d7aa-4571-a336-130ab2a9bf77","Type":"ContainerStarted","Data":"9094cbe69d4d93164fe0071d541adb363f4a3aae4c9edeb893474c573c1f1481"} Jan 25 
08:43:46 crc kubenswrapper[4832]: I0125 08:43:46.582213 4832 generic.go:334] "Generic (PLEG): container finished" podID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerID="76499b965008c0081a64fe86b145e902ba275aee8d77ecb301205eed690a0c3c" exitCode=0 Jan 25 08:43:46 crc kubenswrapper[4832]: I0125 08:43:46.582271 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dnh4" event={"ID":"bf137d78-d7aa-4571-a336-130ab2a9bf77","Type":"ContainerDied","Data":"76499b965008c0081a64fe86b145e902ba275aee8d77ecb301205eed690a0c3c"} Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.264621 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfqkb"] Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.267285 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.306469 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfqkb"] Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.351053 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-utilities\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.351159 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-catalog-content\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.351230 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n6cs\" (UniqueName: \"kubernetes.io/projected/20959446-e6d0-4c75-a573-73340d847308-kube-api-access-6n6cs\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.453519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-utilities\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.453615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-catalog-content\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.453670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n6cs\" (UniqueName: \"kubernetes.io/projected/20959446-e6d0-4c75-a573-73340d847308-kube-api-access-6n6cs\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.454212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-utilities\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.454422 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-catalog-content\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.478139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n6cs\" (UniqueName: \"kubernetes.io/projected/20959446-e6d0-4c75-a573-73340d847308-kube-api-access-6n6cs\") pod \"redhat-marketplace-wfqkb\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:50 crc kubenswrapper[4832]: I0125 08:43:50.599437 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:43:51 crc kubenswrapper[4832]: I0125 08:43:51.151175 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfqkb"] Jan 25 08:43:51 crc kubenswrapper[4832]: I0125 08:43:51.703052 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dnh4" event={"ID":"bf137d78-d7aa-4571-a336-130ab2a9bf77","Type":"ContainerStarted","Data":"1b4745a995caeb2ad8beb3a59498bfb7f36e921af68cc2a7e261383dea8935da"} Jan 25 08:43:51 crc kubenswrapper[4832]: I0125 08:43:51.706028 4832 generic.go:334] "Generic (PLEG): container finished" podID="20959446-e6d0-4c75-a573-73340d847308" containerID="225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9" exitCode=0 Jan 25 08:43:51 crc kubenswrapper[4832]: I0125 08:43:51.706075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfqkb" event={"ID":"20959446-e6d0-4c75-a573-73340d847308","Type":"ContainerDied","Data":"225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9"} Jan 25 08:43:51 
crc kubenswrapper[4832]: I0125 08:43:51.706130 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfqkb" event={"ID":"20959446-e6d0-4c75-a573-73340d847308","Type":"ContainerStarted","Data":"20ac646aa9df1ad34f861e62ee94bbaa1a0a610a8492ebf3679d798e2c1a3357"} Jan 25 08:43:52 crc kubenswrapper[4832]: I0125 08:43:52.150217 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:43:52 crc kubenswrapper[4832]: I0125 08:43:52.150697 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:43:52 crc kubenswrapper[4832]: I0125 08:43:52.717970 4832 generic.go:334] "Generic (PLEG): container finished" podID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerID="1b4745a995caeb2ad8beb3a59498bfb7f36e921af68cc2a7e261383dea8935da" exitCode=0 Jan 25 08:43:52 crc kubenswrapper[4832]: I0125 08:43:52.718019 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dnh4" event={"ID":"bf137d78-d7aa-4571-a336-130ab2a9bf77","Type":"ContainerDied","Data":"1b4745a995caeb2ad8beb3a59498bfb7f36e921af68cc2a7e261383dea8935da"} Jan 25 08:43:53 crc kubenswrapper[4832]: I0125 08:43:53.728993 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfqkb" event={"ID":"20959446-e6d0-4c75-a573-73340d847308","Type":"ContainerStarted","Data":"584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165"} Jan 25 08:43:55 crc kubenswrapper[4832]: 
I0125 08:43:55.762149 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dnh4" event={"ID":"bf137d78-d7aa-4571-a336-130ab2a9bf77","Type":"ContainerStarted","Data":"a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6"} Jan 25 08:43:55 crc kubenswrapper[4832]: I0125 08:43:55.766106 4832 generic.go:334] "Generic (PLEG): container finished" podID="20959446-e6d0-4c75-a573-73340d847308" containerID="584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165" exitCode=0 Jan 25 08:43:55 crc kubenswrapper[4832]: I0125 08:43:55.766159 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfqkb" event={"ID":"20959446-e6d0-4c75-a573-73340d847308","Type":"ContainerDied","Data":"584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165"} Jan 25 08:43:56 crc kubenswrapper[4832]: I0125 08:43:56.795725 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dnh4" podStartSLOduration=9.760025113 podStartE2EDuration="12.795703652s" podCreationTimestamp="2026-01-25 08:43:44 +0000 UTC" firstStartedPulling="2026-01-25 08:43:50.100017827 +0000 UTC m=+2812.773841360" lastFinishedPulling="2026-01-25 08:43:53.135696366 +0000 UTC m=+2815.809519899" observedRunningTime="2026-01-25 08:43:56.791003696 +0000 UTC m=+2819.464827229" watchObservedRunningTime="2026-01-25 08:43:56.795703652 +0000 UTC m=+2819.469527185" Jan 25 08:44:05 crc kubenswrapper[4832]: I0125 08:44:05.008375 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:44:05 crc kubenswrapper[4832]: I0125 08:44:05.008978 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:44:05 crc kubenswrapper[4832]: I0125 08:44:05.134606 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:44:05 crc kubenswrapper[4832]: I0125 08:44:05.925448 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:44:05 crc kubenswrapper[4832]: I0125 08:44:05.982448 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dnh4"] Jan 25 08:44:07 crc kubenswrapper[4832]: I0125 08:44:07.891874 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7dnh4" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="registry-server" containerID="cri-o://a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6" gracePeriod=2 Jan 25 08:44:08 crc kubenswrapper[4832]: I0125 08:44:08.906221 4832 generic.go:334] "Generic (PLEG): container finished" podID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerID="a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6" exitCode=0 Jan 25 08:44:08 crc kubenswrapper[4832]: I0125 08:44:08.907545 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dnh4" event={"ID":"bf137d78-d7aa-4571-a336-130ab2a9bf77","Type":"ContainerDied","Data":"a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6"} Jan 25 08:44:15 crc kubenswrapper[4832]: E0125 08:44:15.010076 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6 is running failed: container process not found" containerID="a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:44:15 crc kubenswrapper[4832]: E0125 08:44:15.012359 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6 is running failed: container process not found" containerID="a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:44:15 crc kubenswrapper[4832]: E0125 08:44:15.012792 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6 is running failed: container process not found" containerID="a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 08:44:15 crc kubenswrapper[4832]: E0125 08:44:15.012895 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-7dnh4" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="registry-server" Jan 25 08:44:22 crc kubenswrapper[4832]: I0125 08:44:22.149962 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:44:22 crc kubenswrapper[4832]: I0125 08:44:22.150543 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:44:24 crc kubenswrapper[4832]: E0125 
08:44:24.106897 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 25 08:44:24 crc kubenswrapper[4832]: E0125 08:44:24.107832 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMo
unt{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rft5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(f075c376-fe6e-44de-bb3d-113de5b9fb3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 08:44:24 crc kubenswrapper[4832]: E0125 08:44:24.113552 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="f075c376-fe6e-44de-bb3d-113de5b9fb3f" Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.429399 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.585941 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-utilities\") pod \"bf137d78-d7aa-4571-a336-130ab2a9bf77\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.586431 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-catalog-content\") pod \"bf137d78-d7aa-4571-a336-130ab2a9bf77\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.586650 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpbk5\" (UniqueName: \"kubernetes.io/projected/bf137d78-d7aa-4571-a336-130ab2a9bf77-kube-api-access-bpbk5\") pod \"bf137d78-d7aa-4571-a336-130ab2a9bf77\" (UID: \"bf137d78-d7aa-4571-a336-130ab2a9bf77\") " Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.587429 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-utilities" (OuterVolumeSpecName: "utilities") pod "bf137d78-d7aa-4571-a336-130ab2a9bf77" (UID: "bf137d78-d7aa-4571-a336-130ab2a9bf77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.595282 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf137d78-d7aa-4571-a336-130ab2a9bf77-kube-api-access-bpbk5" (OuterVolumeSpecName: "kube-api-access-bpbk5") pod "bf137d78-d7aa-4571-a336-130ab2a9bf77" (UID: "bf137d78-d7aa-4571-a336-130ab2a9bf77"). 
InnerVolumeSpecName "kube-api-access-bpbk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.639224 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf137d78-d7aa-4571-a336-130ab2a9bf77" (UID: "bf137d78-d7aa-4571-a336-130ab2a9bf77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.688744 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpbk5\" (UniqueName: \"kubernetes.io/projected/bf137d78-d7aa-4571-a336-130ab2a9bf77-kube-api-access-bpbk5\") on node \"crc\" DevicePath \"\"" Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.688788 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:44:24 crc kubenswrapper[4832]: I0125 08:44:24.688802 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf137d78-d7aa-4571-a336-130ab2a9bf77-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.064121 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dnh4" event={"ID":"bf137d78-d7aa-4571-a336-130ab2a9bf77","Type":"ContainerDied","Data":"9094cbe69d4d93164fe0071d541adb363f4a3aae4c9edeb893474c573c1f1481"} Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.064147 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dnh4" Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.064186 4832 scope.go:117] "RemoveContainer" containerID="a807a7f72b58c05aaaa2d96a5dd0dfed7c5627837eadbe44d422e8b96ca73ac6" Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.067420 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfqkb" event={"ID":"20959446-e6d0-4c75-a573-73340d847308","Type":"ContainerStarted","Data":"059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389"} Jan 25 08:44:25 crc kubenswrapper[4832]: E0125 08:44:25.069104 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="f075c376-fe6e-44de-bb3d-113de5b9fb3f" Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.090528 4832 scope.go:117] "RemoveContainer" containerID="1b4745a995caeb2ad8beb3a59498bfb7f36e921af68cc2a7e261383dea8935da" Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.134649 4832 scope.go:117] "RemoveContainer" containerID="76499b965008c0081a64fe86b145e902ba275aee8d77ecb301205eed690a0c3c" Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.157236 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfqkb" podStartSLOduration=2.798577454 podStartE2EDuration="35.157204888s" podCreationTimestamp="2026-01-25 08:43:50 +0000 UTC" firstStartedPulling="2026-01-25 08:43:51.707866915 +0000 UTC m=+2814.381690448" lastFinishedPulling="2026-01-25 08:44:24.066494349 +0000 UTC m=+2846.740317882" observedRunningTime="2026-01-25 08:44:25.148019193 +0000 UTC m=+2847.821842736" watchObservedRunningTime="2026-01-25 08:44:25.157204888 +0000 UTC m=+2847.831028421" Jan 25 
08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.212454 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dnh4"] Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.242667 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7dnh4"] Jan 25 08:44:25 crc kubenswrapper[4832]: I0125 08:44:25.680545 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" path="/var/lib/kubelet/pods/bf137d78-d7aa-4571-a336-130ab2a9bf77/volumes" Jan 25 08:44:30 crc kubenswrapper[4832]: I0125 08:44:30.600500 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:44:30 crc kubenswrapper[4832]: I0125 08:44:30.602048 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:44:30 crc kubenswrapper[4832]: I0125 08:44:30.646198 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:44:31 crc kubenswrapper[4832]: I0125 08:44:31.175779 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:44:31 crc kubenswrapper[4832]: I0125 08:44:31.221248 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfqkb"] Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.149505 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfqkb" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="registry-server" containerID="cri-o://059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389" gracePeriod=2 Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.618717 4832 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.785061 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-utilities\") pod \"20959446-e6d0-4c75-a573-73340d847308\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.785198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n6cs\" (UniqueName: \"kubernetes.io/projected/20959446-e6d0-4c75-a573-73340d847308-kube-api-access-6n6cs\") pod \"20959446-e6d0-4c75-a573-73340d847308\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.785283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-catalog-content\") pod \"20959446-e6d0-4c75-a573-73340d847308\" (UID: \"20959446-e6d0-4c75-a573-73340d847308\") " Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.786137 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-utilities" (OuterVolumeSpecName: "utilities") pod "20959446-e6d0-4c75-a573-73340d847308" (UID: "20959446-e6d0-4c75-a573-73340d847308"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.793224 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20959446-e6d0-4c75-a573-73340d847308-kube-api-access-6n6cs" (OuterVolumeSpecName: "kube-api-access-6n6cs") pod "20959446-e6d0-4c75-a573-73340d847308" (UID: "20959446-e6d0-4c75-a573-73340d847308"). InnerVolumeSpecName "kube-api-access-6n6cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.810175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20959446-e6d0-4c75-a573-73340d847308" (UID: "20959446-e6d0-4c75-a573-73340d847308"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.888369 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.888444 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20959446-e6d0-4c75-a573-73340d847308-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:44:33 crc kubenswrapper[4832]: I0125 08:44:33.888464 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n6cs\" (UniqueName: \"kubernetes.io/projected/20959446-e6d0-4c75-a573-73340d847308-kube-api-access-6n6cs\") on node \"crc\" DevicePath \"\"" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.162502 4832 generic.go:334] "Generic (PLEG): container finished" podID="20959446-e6d0-4c75-a573-73340d847308" containerID="059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389" exitCode=0 Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.162561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfqkb" event={"ID":"20959446-e6d0-4c75-a573-73340d847308","Type":"ContainerDied","Data":"059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389"} Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.162596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-wfqkb" event={"ID":"20959446-e6d0-4c75-a573-73340d847308","Type":"ContainerDied","Data":"20ac646aa9df1ad34f861e62ee94bbaa1a0a610a8492ebf3679d798e2c1a3357"} Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.162618 4832 scope.go:117] "RemoveContainer" containerID="059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.162839 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfqkb" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.191222 4832 scope.go:117] "RemoveContainer" containerID="584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.212603 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfqkb"] Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.220763 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfqkb"] Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.230220 4832 scope.go:117] "RemoveContainer" containerID="225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.281644 4832 scope.go:117] "RemoveContainer" containerID="059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389" Jan 25 08:44:34 crc kubenswrapper[4832]: E0125 08:44:34.282868 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389\": container with ID starting with 059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389 not found: ID does not exist" containerID="059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.282904 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389"} err="failed to get container status \"059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389\": rpc error: code = NotFound desc = could not find container \"059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389\": container with ID starting with 059e92e9cfaef88e44330c01bfbdded450dddb3195b8e6bf61280a1b5ab4b389 not found: ID does not exist" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.282929 4832 scope.go:117] "RemoveContainer" containerID="584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165" Jan 25 08:44:34 crc kubenswrapper[4832]: E0125 08:44:34.283543 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165\": container with ID starting with 584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165 not found: ID does not exist" containerID="584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.283566 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165"} err="failed to get container status \"584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165\": rpc error: code = NotFound desc = could not find container \"584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165\": container with ID starting with 584a9893af3ef1223f6f081acfebf588cd109918f9b0691a9dfb9b22df2f0165 not found: ID does not exist" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.283582 4832 scope.go:117] "RemoveContainer" containerID="225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9" Jan 25 08:44:34 crc kubenswrapper[4832]: E0125 
08:44:34.283891 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9\": container with ID starting with 225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9 not found: ID does not exist" containerID="225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9" Jan 25 08:44:34 crc kubenswrapper[4832]: I0125 08:44:34.283916 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9"} err="failed to get container status \"225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9\": rpc error: code = NotFound desc = could not find container \"225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9\": container with ID starting with 225b63af2f47bd73075ddd356717bef80abffc4efc8c15b207d45b35f74ed2d9 not found: ID does not exist" Jan 25 08:44:35 crc kubenswrapper[4832]: I0125 08:44:35.680247 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20959446-e6d0-4c75-a573-73340d847308" path="/var/lib/kubelet/pods/20959446-e6d0-4c75-a573-73340d847308/volumes" Jan 25 08:44:41 crc kubenswrapper[4832]: I0125 08:44:41.211140 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 25 08:44:42 crc kubenswrapper[4832]: I0125 08:44:42.238623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f075c376-fe6e-44de-bb3d-113de5b9fb3f","Type":"ContainerStarted","Data":"60691ffa1d211192cd9ccf878b2abc715c52cee85666c1a21dae351f7a192400"} Jan 25 08:44:42 crc kubenswrapper[4832]: I0125 08:44:42.262097 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.674470494 podStartE2EDuration="1m0.262072532s" 
podCreationTimestamp="2026-01-25 08:43:42 +0000 UTC" firstStartedPulling="2026-01-25 08:43:44.62034279 +0000 UTC m=+2807.294166323" lastFinishedPulling="2026-01-25 08:44:41.207944828 +0000 UTC m=+2863.881768361" observedRunningTime="2026-01-25 08:44:42.255351624 +0000 UTC m=+2864.929175167" watchObservedRunningTime="2026-01-25 08:44:42.262072532 +0000 UTC m=+2864.935896065" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.137418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vmmvk"] Jan 25 08:44:51 crc kubenswrapper[4832]: E0125 08:44:51.138608 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="extract-content" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.138626 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="extract-content" Jan 25 08:44:51 crc kubenswrapper[4832]: E0125 08:44:51.138646 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="extract-utilities" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.138654 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="extract-utilities" Jan 25 08:44:51 crc kubenswrapper[4832]: E0125 08:44:51.138667 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="registry-server" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.138676 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="registry-server" Jan 25 08:44:51 crc kubenswrapper[4832]: E0125 08:44:51.138695 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="extract-content" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 
08:44:51.138701 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="extract-content" Jan 25 08:44:51 crc kubenswrapper[4832]: E0125 08:44:51.138734 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="extract-utilities" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.138742 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="extract-utilities" Jan 25 08:44:51 crc kubenswrapper[4832]: E0125 08:44:51.138944 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="registry-server" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.138951 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="registry-server" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.139148 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="20959446-e6d0-4c75-a573-73340d847308" containerName="registry-server" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.139171 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf137d78-d7aa-4571-a336-130ab2a9bf77" containerName="registry-server" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.141501 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.153658 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmmvk"] Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.265627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-utilities\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.265699 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dkz\" (UniqueName: \"kubernetes.io/projected/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-kube-api-access-66dkz\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.266080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-catalog-content\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.368724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-utilities\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.368771 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-66dkz\" (UniqueName: \"kubernetes.io/projected/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-kube-api-access-66dkz\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.368842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-catalog-content\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.369172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-utilities\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.369211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-catalog-content\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.399327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66dkz\" (UniqueName: \"kubernetes.io/projected/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-kube-api-access-66dkz\") pod \"certified-operators-vmmvk\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.482499 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:44:51 crc kubenswrapper[4832]: I0125 08:44:51.989536 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmmvk"] Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.149684 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.149750 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.149799 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.150648 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.150721 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" 
containerID="cri-o://0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" gracePeriod=600 Jan 25 08:44:52 crc kubenswrapper[4832]: E0125 08:44:52.284861 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.360917 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" exitCode=0 Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.361039 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246"} Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.361151 4832 scope.go:117] "RemoveContainer" containerID="01a3d6a79b771ae9ac2fb9588d7531ae3092546b29765dbea401f0026700a915" Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.363281 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:44:52 crc kubenswrapper[4832]: E0125 08:44:52.363792 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" 
podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.399118 4832 generic.go:334] "Generic (PLEG): container finished" podID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerID="721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339" exitCode=0 Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.399874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmmvk" event={"ID":"897711cc-6bad-4714-ac9f-2b69b3e7ed1d","Type":"ContainerDied","Data":"721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339"} Jan 25 08:44:52 crc kubenswrapper[4832]: I0125 08:44:52.399977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmmvk" event={"ID":"897711cc-6bad-4714-ac9f-2b69b3e7ed1d","Type":"ContainerStarted","Data":"16373d9f804e99ab8955321f215f693c96b0cc89d0e297aa2f5d6bb895fa4607"} Jan 25 08:44:53 crc kubenswrapper[4832]: I0125 08:44:53.410358 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmmvk" event={"ID":"897711cc-6bad-4714-ac9f-2b69b3e7ed1d","Type":"ContainerStarted","Data":"4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a"} Jan 25 08:44:54 crc kubenswrapper[4832]: I0125 08:44:54.423809 4832 generic.go:334] "Generic (PLEG): container finished" podID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerID="4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a" exitCode=0 Jan 25 08:44:54 crc kubenswrapper[4832]: I0125 08:44:54.423900 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmmvk" event={"ID":"897711cc-6bad-4714-ac9f-2b69b3e7ed1d","Type":"ContainerDied","Data":"4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a"} Jan 25 08:44:57 crc kubenswrapper[4832]: I0125 08:44:57.449222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vmmvk" event={"ID":"897711cc-6bad-4714-ac9f-2b69b3e7ed1d","Type":"ContainerStarted","Data":"40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93"} Jan 25 08:44:57 crc kubenswrapper[4832]: I0125 08:44:57.470983 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vmmvk" podStartSLOduration=2.845072761 podStartE2EDuration="6.470955237s" podCreationTimestamp="2026-01-25 08:44:51 +0000 UTC" firstStartedPulling="2026-01-25 08:44:52.4140183 +0000 UTC m=+2875.087841833" lastFinishedPulling="2026-01-25 08:44:56.039900766 +0000 UTC m=+2878.713724309" observedRunningTime="2026-01-25 08:44:57.465681574 +0000 UTC m=+2880.139505107" watchObservedRunningTime="2026-01-25 08:44:57.470955237 +0000 UTC m=+2880.144778770" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.154339 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6"] Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.171678 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.176618 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.177098 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.178718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7sxx\" (UniqueName: \"kubernetes.io/projected/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-kube-api-access-p7sxx\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.179052 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-config-volume\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.179282 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-secret-volume\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.183434 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6"] Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.281747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7sxx\" (UniqueName: \"kubernetes.io/projected/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-kube-api-access-p7sxx\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.281952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-config-volume\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.282016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-secret-volume\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.283835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-config-volume\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.287447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-secret-volume\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.303175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7sxx\" (UniqueName: \"kubernetes.io/projected/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-kube-api-access-p7sxx\") pod \"collect-profiles-29488845-4p7v6\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.502148 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:00 crc kubenswrapper[4832]: I0125 08:45:00.944289 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6"] Jan 25 08:45:00 crc kubenswrapper[4832]: W0125 08:45:00.945374 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8180eadd_bb60_469a_ae1b_9dc2af83d3dd.slice/crio-683cfbf7e9331f8ac1e3c867cabd7812c5c0f0b89a5890c063d60846d250662c WatchSource:0}: Error finding container 683cfbf7e9331f8ac1e3c867cabd7812c5c0f0b89a5890c063d60846d250662c: Status 404 returned error can't find the container with id 683cfbf7e9331f8ac1e3c867cabd7812c5c0f0b89a5890c063d60846d250662c Jan 25 08:45:01 crc kubenswrapper[4832]: I0125 08:45:01.482931 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:45:01 crc kubenswrapper[4832]: I0125 08:45:01.483165 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vmmvk" Jan 
25 08:45:01 crc kubenswrapper[4832]: I0125 08:45:01.486318 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" event={"ID":"8180eadd-bb60-469a-ae1b-9dc2af83d3dd","Type":"ContainerStarted","Data":"1a7f4ee94fe89a3bc3d04be1a0d939ff0a9951d7ace02daa7b01cd6a4452bb96"} Jan 25 08:45:01 crc kubenswrapper[4832]: I0125 08:45:01.486380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" event={"ID":"8180eadd-bb60-469a-ae1b-9dc2af83d3dd","Type":"ContainerStarted","Data":"683cfbf7e9331f8ac1e3c867cabd7812c5c0f0b89a5890c063d60846d250662c"} Jan 25 08:45:01 crc kubenswrapper[4832]: I0125 08:45:01.535451 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:45:02 crc kubenswrapper[4832]: I0125 08:45:02.499233 4832 generic.go:334] "Generic (PLEG): container finished" podID="8180eadd-bb60-469a-ae1b-9dc2af83d3dd" containerID="1a7f4ee94fe89a3bc3d04be1a0d939ff0a9951d7ace02daa7b01cd6a4452bb96" exitCode=0 Jan 25 08:45:02 crc kubenswrapper[4832]: I0125 08:45:02.499367 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" event={"ID":"8180eadd-bb60-469a-ae1b-9dc2af83d3dd","Type":"ContainerDied","Data":"1a7f4ee94fe89a3bc3d04be1a0d939ff0a9951d7ace02daa7b01cd6a4452bb96"} Jan 25 08:45:02 crc kubenswrapper[4832]: I0125 08:45:02.556007 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:45:02 crc kubenswrapper[4832]: I0125 08:45:02.607410 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vmmvk"] Jan 25 08:45:02 crc kubenswrapper[4832]: I0125 08:45:02.669701 4832 scope.go:117] "RemoveContainer" 
containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:45:02 crc kubenswrapper[4832]: E0125 08:45:02.670061 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:45:03 crc kubenswrapper[4832]: I0125 08:45:03.910854 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:03 crc kubenswrapper[4832]: I0125 08:45:03.955038 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-secret-volume\") pod \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " Jan 25 08:45:03 crc kubenswrapper[4832]: I0125 08:45:03.955126 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-config-volume\") pod \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " Jan 25 08:45:03 crc kubenswrapper[4832]: I0125 08:45:03.955160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7sxx\" (UniqueName: \"kubernetes.io/projected/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-kube-api-access-p7sxx\") pod \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\" (UID: \"8180eadd-bb60-469a-ae1b-9dc2af83d3dd\") " Jan 25 08:45:03 crc kubenswrapper[4832]: I0125 08:45:03.956673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "8180eadd-bb60-469a-ae1b-9dc2af83d3dd" (UID: "8180eadd-bb60-469a-ae1b-9dc2af83d3dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 08:45:03 crc kubenswrapper[4832]: I0125 08:45:03.961667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-kube-api-access-p7sxx" (OuterVolumeSpecName: "kube-api-access-p7sxx") pod "8180eadd-bb60-469a-ae1b-9dc2af83d3dd" (UID: "8180eadd-bb60-469a-ae1b-9dc2af83d3dd"). InnerVolumeSpecName "kube-api-access-p7sxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:45:03 crc kubenswrapper[4832]: I0125 08:45:03.962145 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8180eadd-bb60-469a-ae1b-9dc2af83d3dd" (UID: "8180eadd-bb60-469a-ae1b-9dc2af83d3dd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.057463 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.057505 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.057518 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7sxx\" (UniqueName: \"kubernetes.io/projected/8180eadd-bb60-469a-ae1b-9dc2af83d3dd-kube-api-access-p7sxx\") on node \"crc\" DevicePath \"\"" Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.520042 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.520025 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488845-4p7v6" event={"ID":"8180eadd-bb60-469a-ae1b-9dc2af83d3dd","Type":"ContainerDied","Data":"683cfbf7e9331f8ac1e3c867cabd7812c5c0f0b89a5890c063d60846d250662c"} Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.520114 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="683cfbf7e9331f8ac1e3c867cabd7812c5c0f0b89a5890c063d60846d250662c" Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.520146 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vmmvk" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="registry-server" 
containerID="cri-o://40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93" gracePeriod=2 Jan 25 08:45:04 crc kubenswrapper[4832]: I0125 08:45:04.997253 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8"] Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.005458 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488800-492g8"] Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.050733 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.179753 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-utilities\") pod \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.179888 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66dkz\" (UniqueName: \"kubernetes.io/projected/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-kube-api-access-66dkz\") pod \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.180112 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-catalog-content\") pod \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\" (UID: \"897711cc-6bad-4714-ac9f-2b69b3e7ed1d\") " Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.180581 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-utilities" (OuterVolumeSpecName: 
"utilities") pod "897711cc-6bad-4714-ac9f-2b69b3e7ed1d" (UID: "897711cc-6bad-4714-ac9f-2b69b3e7ed1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.196924 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-kube-api-access-66dkz" (OuterVolumeSpecName: "kube-api-access-66dkz") pod "897711cc-6bad-4714-ac9f-2b69b3e7ed1d" (UID: "897711cc-6bad-4714-ac9f-2b69b3e7ed1d"). InnerVolumeSpecName "kube-api-access-66dkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.229143 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "897711cc-6bad-4714-ac9f-2b69b3e7ed1d" (UID: "897711cc-6bad-4714-ac9f-2b69b3e7ed1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.283928 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66dkz\" (UniqueName: \"kubernetes.io/projected/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-kube-api-access-66dkz\") on node \"crc\" DevicePath \"\"" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.283967 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.283984 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897711cc-6bad-4714-ac9f-2b69b3e7ed1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.531877 4832 generic.go:334] "Generic (PLEG): container finished" podID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerID="40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93" exitCode=0 Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.531961 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vmmvk" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.531983 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmmvk" event={"ID":"897711cc-6bad-4714-ac9f-2b69b3e7ed1d","Type":"ContainerDied","Data":"40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93"} Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.532580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmmvk" event={"ID":"897711cc-6bad-4714-ac9f-2b69b3e7ed1d","Type":"ContainerDied","Data":"16373d9f804e99ab8955321f215f693c96b0cc89d0e297aa2f5d6bb895fa4607"} Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.532605 4832 scope.go:117] "RemoveContainer" containerID="40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.558229 4832 scope.go:117] "RemoveContainer" containerID="4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.576372 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vmmvk"] Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.588391 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vmmvk"] Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.617352 4832 scope.go:117] "RemoveContainer" containerID="721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.639131 4832 scope.go:117] "RemoveContainer" containerID="40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93" Jan 25 08:45:05 crc kubenswrapper[4832]: E0125 08:45:05.639780 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93\": container with ID starting with 40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93 not found: ID does not exist" containerID="40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.639846 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93"} err="failed to get container status \"40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93\": rpc error: code = NotFound desc = could not find container \"40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93\": container with ID starting with 40a710056ea626d7ba2341231f3f9fe39c7ee7d169b2b3fdc01ec805d1648b93 not found: ID does not exist" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.639884 4832 scope.go:117] "RemoveContainer" containerID="4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a" Jan 25 08:45:05 crc kubenswrapper[4832]: E0125 08:45:05.640439 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a\": container with ID starting with 4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a not found: ID does not exist" containerID="4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.640486 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a"} err="failed to get container status \"4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a\": rpc error: code = NotFound desc = could not find container \"4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a\": container with ID 
starting with 4d2f1ac7d1ff5125dc203fa909a4093f7ce214f8ac0193c3a13cc1e760393c8a not found: ID does not exist" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.640513 4832 scope.go:117] "RemoveContainer" containerID="721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339" Jan 25 08:45:05 crc kubenswrapper[4832]: E0125 08:45:05.640763 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339\": container with ID starting with 721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339 not found: ID does not exist" containerID="721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.640796 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339"} err="failed to get container status \"721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339\": rpc error: code = NotFound desc = could not find container \"721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339\": container with ID starting with 721e593ef6154c4268a1d717c87bea9ad688ff07ba0d7649d559d0f93c682339 not found: ID does not exist" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.684472 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169d3ee1-b6be-49bc-9522-c3579c6965f4" path="/var/lib/kubelet/pods/169d3ee1-b6be-49bc-9522-c3579c6965f4/volumes" Jan 25 08:45:05 crc kubenswrapper[4832]: I0125 08:45:05.685223 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" path="/var/lib/kubelet/pods/897711cc-6bad-4714-ac9f-2b69b3e7ed1d/volumes" Jan 25 08:45:17 crc kubenswrapper[4832]: I0125 08:45:17.675824 4832 scope.go:117] "RemoveContainer" 
containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:45:17 crc kubenswrapper[4832]: E0125 08:45:17.676681 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:45:32 crc kubenswrapper[4832]: I0125 08:45:32.669724 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:45:32 crc kubenswrapper[4832]: E0125 08:45:32.670514 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:45:42 crc kubenswrapper[4832]: I0125 08:45:42.217112 4832 scope.go:117] "RemoveContainer" containerID="5f37ea3a126374f6bc752d94be6de4dbaa535813eb6522dc68fa3ce71b8c7394" Jan 25 08:45:44 crc kubenswrapper[4832]: I0125 08:45:44.670169 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:45:44 crc kubenswrapper[4832]: E0125 08:45:44.670786 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:45:59 crc kubenswrapper[4832]: I0125 08:45:59.670760 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:45:59 crc kubenswrapper[4832]: E0125 08:45:59.671601 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:46:12 crc kubenswrapper[4832]: I0125 08:46:12.669714 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:46:12 crc kubenswrapper[4832]: E0125 08:46:12.670540 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:46:23 crc kubenswrapper[4832]: I0125 08:46:23.670471 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:46:23 crc kubenswrapper[4832]: E0125 08:46:23.672011 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:46:36 crc kubenswrapper[4832]: I0125 08:46:36.671140 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:46:36 crc kubenswrapper[4832]: E0125 08:46:36.673452 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:46:49 crc kubenswrapper[4832]: I0125 08:46:49.670097 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:46:49 crc kubenswrapper[4832]: E0125 08:46:49.670872 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:47:02 crc kubenswrapper[4832]: I0125 08:47:02.670509 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:47:02 crc kubenswrapper[4832]: E0125 08:47:02.672216 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:47:14 crc kubenswrapper[4832]: I0125 08:47:14.669538 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:47:14 crc kubenswrapper[4832]: E0125 08:47:14.670476 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:47:26 crc kubenswrapper[4832]: I0125 08:47:26.669655 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:47:26 crc kubenswrapper[4832]: E0125 08:47:26.670731 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:47:38 crc kubenswrapper[4832]: I0125 08:47:38.669622 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:47:38 crc kubenswrapper[4832]: E0125 08:47:38.670771 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:47:51 crc kubenswrapper[4832]: I0125 08:47:51.669769 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:47:51 crc kubenswrapper[4832]: E0125 08:47:51.670450 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:48:04 crc kubenswrapper[4832]: I0125 08:48:04.670928 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:48:04 crc kubenswrapper[4832]: E0125 08:48:04.671746 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:48:18 crc kubenswrapper[4832]: I0125 08:48:18.670672 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:48:18 crc kubenswrapper[4832]: E0125 08:48:18.671485 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:48:32 crc kubenswrapper[4832]: I0125 08:48:32.670236 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:48:32 crc kubenswrapper[4832]: E0125 08:48:32.671017 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:48:47 crc kubenswrapper[4832]: I0125 08:48:47.675745 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:48:47 crc kubenswrapper[4832]: E0125 08:48:47.676596 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:48:59 crc kubenswrapper[4832]: I0125 08:48:59.669472 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:48:59 crc kubenswrapper[4832]: E0125 08:48:59.670256 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.071640 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rczqs"] Jan 25 08:49:04 crc kubenswrapper[4832]: E0125 08:49:04.072530 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8180eadd-bb60-469a-ae1b-9dc2af83d3dd" containerName="collect-profiles" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.072576 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8180eadd-bb60-469a-ae1b-9dc2af83d3dd" containerName="collect-profiles" Jan 25 08:49:04 crc kubenswrapper[4832]: E0125 08:49:04.072586 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="extract-content" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.072592 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="extract-content" Jan 25 08:49:04 crc kubenswrapper[4832]: E0125 08:49:04.072629 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="registry-server" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.072636 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="registry-server" Jan 25 08:49:04 crc kubenswrapper[4832]: E0125 08:49:04.072654 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="extract-utilities" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.072661 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="extract-utilities" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.072906 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="897711cc-6bad-4714-ac9f-2b69b3e7ed1d" containerName="registry-server" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.072996 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8180eadd-bb60-469a-ae1b-9dc2af83d3dd" containerName="collect-profiles" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.074350 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.081029 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rczqs"] Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.245624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-catalog-content\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.245745 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46c7d\" (UniqueName: \"kubernetes.io/projected/81369db4-bfc3-4fa1-98d2-a7562916fa78-kube-api-access-46c7d\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.245840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-utilities\") pod \"redhat-operators-rczqs\" (UID: 
\"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.347814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46c7d\" (UniqueName: \"kubernetes.io/projected/81369db4-bfc3-4fa1-98d2-a7562916fa78-kube-api-access-46c7d\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.347908 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-utilities\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.347983 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-catalog-content\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.348482 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-catalog-content\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.348743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-utilities\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " 
pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.370263 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46c7d\" (UniqueName: \"kubernetes.io/projected/81369db4-bfc3-4fa1-98d2-a7562916fa78-kube-api-access-46c7d\") pod \"redhat-operators-rczqs\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.392057 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:04 crc kubenswrapper[4832]: I0125 08:49:04.956000 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rczqs"] Jan 25 08:49:05 crc kubenswrapper[4832]: I0125 08:49:05.695909 4832 generic.go:334] "Generic (PLEG): container finished" podID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerID="6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987" exitCode=0 Jan 25 08:49:05 crc kubenswrapper[4832]: I0125 08:49:05.696005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rczqs" event={"ID":"81369db4-bfc3-4fa1-98d2-a7562916fa78","Type":"ContainerDied","Data":"6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987"} Jan 25 08:49:05 crc kubenswrapper[4832]: I0125 08:49:05.696202 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rczqs" event={"ID":"81369db4-bfc3-4fa1-98d2-a7562916fa78","Type":"ContainerStarted","Data":"860dbf1c355daa6b9f2efd679671edbf0590dcc1009631fd1e262574f465e58b"} Jan 25 08:49:05 crc kubenswrapper[4832]: I0125 08:49:05.699641 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:49:06 crc kubenswrapper[4832]: I0125 08:49:06.708209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-rczqs" event={"ID":"81369db4-bfc3-4fa1-98d2-a7562916fa78","Type":"ContainerStarted","Data":"7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca"} Jan 25 08:49:09 crc kubenswrapper[4832]: I0125 08:49:09.749512 4832 generic.go:334] "Generic (PLEG): container finished" podID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerID="7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca" exitCode=0 Jan 25 08:49:09 crc kubenswrapper[4832]: I0125 08:49:09.749640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rczqs" event={"ID":"81369db4-bfc3-4fa1-98d2-a7562916fa78","Type":"ContainerDied","Data":"7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca"} Jan 25 08:49:10 crc kubenswrapper[4832]: I0125 08:49:10.762888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rczqs" event={"ID":"81369db4-bfc3-4fa1-98d2-a7562916fa78","Type":"ContainerStarted","Data":"e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5"} Jan 25 08:49:10 crc kubenswrapper[4832]: I0125 08:49:10.781903 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rczqs" podStartSLOduration=2.2879460160000002 podStartE2EDuration="6.781880427s" podCreationTimestamp="2026-01-25 08:49:04 +0000 UTC" firstStartedPulling="2026-01-25 08:49:05.69899522 +0000 UTC m=+3128.372818753" lastFinishedPulling="2026-01-25 08:49:10.192929641 +0000 UTC m=+3132.866753164" observedRunningTime="2026-01-25 08:49:10.780816454 +0000 UTC m=+3133.454640017" watchObservedRunningTime="2026-01-25 08:49:10.781880427 +0000 UTC m=+3133.455703960" Jan 25 08:49:12 crc kubenswrapper[4832]: I0125 08:49:12.670063 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:49:12 crc kubenswrapper[4832]: E0125 08:49:12.671884 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:49:14 crc kubenswrapper[4832]: I0125 08:49:14.393161 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:14 crc kubenswrapper[4832]: I0125 08:49:14.393461 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:15 crc kubenswrapper[4832]: I0125 08:49:15.453829 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rczqs" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="registry-server" probeResult="failure" output=< Jan 25 08:49:15 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 25 08:49:15 crc kubenswrapper[4832]: > Jan 25 08:49:24 crc kubenswrapper[4832]: I0125 08:49:24.446076 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:24 crc kubenswrapper[4832]: I0125 08:49:24.493705 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:24 crc kubenswrapper[4832]: I0125 08:49:24.696721 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rczqs"] Jan 25 08:49:25 crc kubenswrapper[4832]: I0125 08:49:25.669882 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:49:25 crc kubenswrapper[4832]: E0125 
08:49:25.670369 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:49:25 crc kubenswrapper[4832]: I0125 08:49:25.889418 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rczqs" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="registry-server" containerID="cri-o://e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5" gracePeriod=2 Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.415688 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.570420 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46c7d\" (UniqueName: \"kubernetes.io/projected/81369db4-bfc3-4fa1-98d2-a7562916fa78-kube-api-access-46c7d\") pod \"81369db4-bfc3-4fa1-98d2-a7562916fa78\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.570593 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-utilities\") pod \"81369db4-bfc3-4fa1-98d2-a7562916fa78\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.570662 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-catalog-content\") pod 
\"81369db4-bfc3-4fa1-98d2-a7562916fa78\" (UID: \"81369db4-bfc3-4fa1-98d2-a7562916fa78\") " Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.575065 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-utilities" (OuterVolumeSpecName: "utilities") pod "81369db4-bfc3-4fa1-98d2-a7562916fa78" (UID: "81369db4-bfc3-4fa1-98d2-a7562916fa78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.576492 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.586690 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81369db4-bfc3-4fa1-98d2-a7562916fa78-kube-api-access-46c7d" (OuterVolumeSpecName: "kube-api-access-46c7d") pod "81369db4-bfc3-4fa1-98d2-a7562916fa78" (UID: "81369db4-bfc3-4fa1-98d2-a7562916fa78"). InnerVolumeSpecName "kube-api-access-46c7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.678944 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46c7d\" (UniqueName: \"kubernetes.io/projected/81369db4-bfc3-4fa1-98d2-a7562916fa78-kube-api-access-46c7d\") on node \"crc\" DevicePath \"\"" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.725452 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81369db4-bfc3-4fa1-98d2-a7562916fa78" (UID: "81369db4-bfc3-4fa1-98d2-a7562916fa78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.781066 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81369db4-bfc3-4fa1-98d2-a7562916fa78-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.901262 4832 generic.go:334] "Generic (PLEG): container finished" podID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerID="e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5" exitCode=0 Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.901312 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rczqs" event={"ID":"81369db4-bfc3-4fa1-98d2-a7562916fa78","Type":"ContainerDied","Data":"e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5"} Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.901354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rczqs" event={"ID":"81369db4-bfc3-4fa1-98d2-a7562916fa78","Type":"ContainerDied","Data":"860dbf1c355daa6b9f2efd679671edbf0590dcc1009631fd1e262574f465e58b"} Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.901376 4832 scope.go:117] "RemoveContainer" containerID="e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.901476 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rczqs" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.933029 4832 scope.go:117] "RemoveContainer" containerID="7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.964542 4832 scope.go:117] "RemoveContainer" containerID="6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987" Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.968169 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rczqs"] Jan 25 08:49:26 crc kubenswrapper[4832]: I0125 08:49:26.977356 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rczqs"] Jan 25 08:49:27 crc kubenswrapper[4832]: I0125 08:49:27.002098 4832 scope.go:117] "RemoveContainer" containerID="e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5" Jan 25 08:49:27 crc kubenswrapper[4832]: E0125 08:49:27.002710 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5\": container with ID starting with e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5 not found: ID does not exist" containerID="e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5" Jan 25 08:49:27 crc kubenswrapper[4832]: I0125 08:49:27.002758 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5"} err="failed to get container status \"e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5\": rpc error: code = NotFound desc = could not find container \"e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5\": container with ID starting with e400b83e9a2714357bb050a6fb65a8bd4f44e24a132177e0e802b8142ffc8bb5 not found: ID does 
not exist" Jan 25 08:49:27 crc kubenswrapper[4832]: I0125 08:49:27.002787 4832 scope.go:117] "RemoveContainer" containerID="7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca" Jan 25 08:49:27 crc kubenswrapper[4832]: E0125 08:49:27.003627 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca\": container with ID starting with 7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca not found: ID does not exist" containerID="7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca" Jan 25 08:49:27 crc kubenswrapper[4832]: I0125 08:49:27.003707 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca"} err="failed to get container status \"7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca\": rpc error: code = NotFound desc = could not find container \"7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca\": container with ID starting with 7b20af00d01c20079567c555dfd5dc23b6df3f29262f94f884862aa607b70cca not found: ID does not exist" Jan 25 08:49:27 crc kubenswrapper[4832]: I0125 08:49:27.003740 4832 scope.go:117] "RemoveContainer" containerID="6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987" Jan 25 08:49:27 crc kubenswrapper[4832]: E0125 08:49:27.004048 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987\": container with ID starting with 6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987 not found: ID does not exist" containerID="6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987" Jan 25 08:49:27 crc kubenswrapper[4832]: I0125 08:49:27.004100 4832 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987"} err="failed to get container status \"6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987\": rpc error: code = NotFound desc = could not find container \"6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987\": container with ID starting with 6c88684d4e2ef562dfa4325d193e1bff6a94adbac831db3d92a85a9006c66987 not found: ID does not exist" Jan 25 08:49:27 crc kubenswrapper[4832]: I0125 08:49:27.685959 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" path="/var/lib/kubelet/pods/81369db4-bfc3-4fa1-98d2-a7562916fa78/volumes" Jan 25 08:49:36 crc kubenswrapper[4832]: I0125 08:49:36.669678 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:49:36 crc kubenswrapper[4832]: E0125 08:49:36.670467 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:49:47 crc kubenswrapper[4832]: I0125 08:49:47.676745 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:49:47 crc kubenswrapper[4832]: E0125 08:49:47.677476 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:50:00 crc kubenswrapper[4832]: I0125 08:50:00.670836 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:50:01 crc kubenswrapper[4832]: I0125 08:50:01.247405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"7ace08f928564b03ea6b92806bc43a72271873c73f1423c0385090593b7be414"} Jan 25 08:52:22 crc kubenswrapper[4832]: I0125 08:52:22.150108 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:52:22 crc kubenswrapper[4832]: I0125 08:52:22.150810 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:52:52 crc kubenswrapper[4832]: I0125 08:52:52.149263 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:52:52 crc kubenswrapper[4832]: I0125 08:52:52.150085 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:53:22 crc kubenswrapper[4832]: I0125 08:53:22.149807 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:53:22 crc kubenswrapper[4832]: I0125 08:53:22.150695 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:53:22 crc kubenswrapper[4832]: I0125 08:53:22.150750 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 08:53:22 crc kubenswrapper[4832]: I0125 08:53:22.151685 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ace08f928564b03ea6b92806bc43a72271873c73f1423c0385090593b7be414"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 08:53:22 crc kubenswrapper[4832]: I0125 08:53:22.151745 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://7ace08f928564b03ea6b92806bc43a72271873c73f1423c0385090593b7be414" gracePeriod=600 Jan 25 08:53:23 crc kubenswrapper[4832]: I0125 
08:53:23.041265 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="7ace08f928564b03ea6b92806bc43a72271873c73f1423c0385090593b7be414" exitCode=0 Jan 25 08:53:23 crc kubenswrapper[4832]: I0125 08:53:23.041344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"7ace08f928564b03ea6b92806bc43a72271873c73f1423c0385090593b7be414"} Jan 25 08:53:23 crc kubenswrapper[4832]: I0125 08:53:23.041855 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"} Jan 25 08:53:23 crc kubenswrapper[4832]: I0125 08:53:23.041877 4832 scope.go:117] "RemoveContainer" containerID="0a0a610809d12c84df2264dec7ffeeee111e92f1be8ae7232e65d8461dcf9246" Jan 25 08:53:54 crc kubenswrapper[4832]: I0125 08:53:54.723495 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-658c5f7995-t6v6k" podUID="81bd3301-f264-4150-8f71-869af2c1ed3d" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.099646 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pdq9j"] Jan 25 08:55:04 crc kubenswrapper[4832]: E0125 08:55:04.100546 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="extract-utilities" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.100560 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="extract-utilities" Jan 25 08:55:04 crc kubenswrapper[4832]: E0125 
08:55:04.100593 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="registry-server" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.100599 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="registry-server" Jan 25 08:55:04 crc kubenswrapper[4832]: E0125 08:55:04.100611 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="extract-content" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.100617 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="extract-content" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.100793 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="81369db4-bfc3-4fa1-98d2-a7562916fa78" containerName="registry-server" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.102357 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.112590 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdq9j"] Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.250624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-utilities\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.250710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-catalog-content\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.250947 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-455mc\" (UniqueName: \"kubernetes.io/projected/82c7022e-ca3c-43e1-bf10-0062888b752f-kube-api-access-455mc\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.353541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-utilities\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.354091 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-catalog-content\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.354149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-455mc\" (UniqueName: \"kubernetes.io/projected/82c7022e-ca3c-43e1-bf10-0062888b752f-kube-api-access-455mc\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.355397 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-utilities\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.355740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-catalog-content\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.381141 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-455mc\" (UniqueName: \"kubernetes.io/projected/82c7022e-ca3c-43e1-bf10-0062888b752f-kube-api-access-455mc\") pod \"certified-operators-pdq9j\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.424459 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.926520 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdq9j"] Jan 25 08:55:04 crc kubenswrapper[4832]: I0125 08:55:04.960386 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdq9j" event={"ID":"82c7022e-ca3c-43e1-bf10-0062888b752f","Type":"ContainerStarted","Data":"29ca65c752e377fa6ef19a72162e7ff32f28cbe2ab2a2776d8e51eee304685c7"} Jan 25 08:55:05 crc kubenswrapper[4832]: I0125 08:55:05.972453 4832 generic.go:334] "Generic (PLEG): container finished" podID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerID="881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326" exitCode=0 Jan 25 08:55:05 crc kubenswrapper[4832]: I0125 08:55:05.972974 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdq9j" event={"ID":"82c7022e-ca3c-43e1-bf10-0062888b752f","Type":"ContainerDied","Data":"881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326"} Jan 25 08:55:05 crc kubenswrapper[4832]: I0125 08:55:05.975780 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 08:55:06 crc kubenswrapper[4832]: I0125 08:55:06.983986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdq9j" event={"ID":"82c7022e-ca3c-43e1-bf10-0062888b752f","Type":"ContainerStarted","Data":"52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24"} Jan 25 08:55:07 crc kubenswrapper[4832]: I0125 08:55:07.999060 4832 generic.go:334] "Generic (PLEG): container finished" podID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerID="52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24" exitCode=0 Jan 25 08:55:08 crc kubenswrapper[4832]: I0125 08:55:07.999138 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-pdq9j" event={"ID":"82c7022e-ca3c-43e1-bf10-0062888b752f","Type":"ContainerDied","Data":"52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24"} Jan 25 08:55:09 crc kubenswrapper[4832]: I0125 08:55:09.010903 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdq9j" event={"ID":"82c7022e-ca3c-43e1-bf10-0062888b752f","Type":"ContainerStarted","Data":"1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3"} Jan 25 08:55:09 crc kubenswrapper[4832]: I0125 08:55:09.036994 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pdq9j" podStartSLOduration=2.557516841 podStartE2EDuration="5.036970331s" podCreationTimestamp="2026-01-25 08:55:04 +0000 UTC" firstStartedPulling="2026-01-25 08:55:05.97547601 +0000 UTC m=+3488.649299543" lastFinishedPulling="2026-01-25 08:55:08.45492948 +0000 UTC m=+3491.128753033" observedRunningTime="2026-01-25 08:55:09.032688347 +0000 UTC m=+3491.706511880" watchObservedRunningTime="2026-01-25 08:55:09.036970331 +0000 UTC m=+3491.710793874" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.487322 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dx4vz"] Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.491259 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.500606 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dx4vz"] Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.599345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-catalog-content\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.599756 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwbv7\" (UniqueName: \"kubernetes.io/projected/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-kube-api-access-nwbv7\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.599982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-utilities\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.702232 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-utilities\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.702360 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-catalog-content\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.702503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwbv7\" (UniqueName: \"kubernetes.io/projected/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-kube-api-access-nwbv7\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.702858 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-utilities\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.702894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-catalog-content\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.725414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwbv7\" (UniqueName: \"kubernetes.io/projected/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-kube-api-access-nwbv7\") pod \"redhat-marketplace-dx4vz\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:11 crc kubenswrapper[4832]: I0125 08:55:11.842436 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:12 crc kubenswrapper[4832]: I0125 08:55:12.306957 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dx4vz"] Jan 25 08:55:13 crc kubenswrapper[4832]: I0125 08:55:13.054656 4832 generic.go:334] "Generic (PLEG): container finished" podID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerID="fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321" exitCode=0 Jan 25 08:55:13 crc kubenswrapper[4832]: I0125 08:55:13.055105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dx4vz" event={"ID":"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2","Type":"ContainerDied","Data":"fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321"} Jan 25 08:55:13 crc kubenswrapper[4832]: I0125 08:55:13.055239 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dx4vz" event={"ID":"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2","Type":"ContainerStarted","Data":"d9cc48ed5097ae223085b63cb94c38e5ddb9e06b02f94c9587adcb478aa7ed0c"} Jan 25 08:55:14 crc kubenswrapper[4832]: I0125 08:55:14.088047 4832 generic.go:334] "Generic (PLEG): container finished" podID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerID="c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e" exitCode=0 Jan 25 08:55:14 crc kubenswrapper[4832]: I0125 08:55:14.088141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dx4vz" event={"ID":"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2","Type":"ContainerDied","Data":"c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e"} Jan 25 08:55:14 crc kubenswrapper[4832]: I0125 08:55:14.425408 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:14 crc kubenswrapper[4832]: I0125 08:55:14.425464 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:14 crc kubenswrapper[4832]: I0125 08:55:14.472922 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:15 crc kubenswrapper[4832]: I0125 08:55:15.099043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dx4vz" event={"ID":"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2","Type":"ContainerStarted","Data":"6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a"} Jan 25 08:55:15 crc kubenswrapper[4832]: I0125 08:55:15.119661 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dx4vz" podStartSLOduration=2.568416549 podStartE2EDuration="4.119637006s" podCreationTimestamp="2026-01-25 08:55:11 +0000 UTC" firstStartedPulling="2026-01-25 08:55:13.056497501 +0000 UTC m=+3495.730321024" lastFinishedPulling="2026-01-25 08:55:14.607717948 +0000 UTC m=+3497.281541481" observedRunningTime="2026-01-25 08:55:15.116064966 +0000 UTC m=+3497.789888509" watchObservedRunningTime="2026-01-25 08:55:15.119637006 +0000 UTC m=+3497.793460539" Jan 25 08:55:15 crc kubenswrapper[4832]: I0125 08:55:15.149150 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:16 crc kubenswrapper[4832]: I0125 08:55:16.870307 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdq9j"] Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.114167 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pdq9j" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="registry-server" containerID="cri-o://1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3" gracePeriod=2 Jan 25 
08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.628850 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.731931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-455mc\" (UniqueName: \"kubernetes.io/projected/82c7022e-ca3c-43e1-bf10-0062888b752f-kube-api-access-455mc\") pod \"82c7022e-ca3c-43e1-bf10-0062888b752f\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.732105 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-utilities\") pod \"82c7022e-ca3c-43e1-bf10-0062888b752f\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.732159 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-catalog-content\") pod \"82c7022e-ca3c-43e1-bf10-0062888b752f\" (UID: \"82c7022e-ca3c-43e1-bf10-0062888b752f\") " Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.733012 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-utilities" (OuterVolumeSpecName: "utilities") pod "82c7022e-ca3c-43e1-bf10-0062888b752f" (UID: "82c7022e-ca3c-43e1-bf10-0062888b752f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.739315 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c7022e-ca3c-43e1-bf10-0062888b752f-kube-api-access-455mc" (OuterVolumeSpecName: "kube-api-access-455mc") pod "82c7022e-ca3c-43e1-bf10-0062888b752f" (UID: "82c7022e-ca3c-43e1-bf10-0062888b752f"). InnerVolumeSpecName "kube-api-access-455mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.780274 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82c7022e-ca3c-43e1-bf10-0062888b752f" (UID: "82c7022e-ca3c-43e1-bf10-0062888b752f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.835317 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-455mc\" (UniqueName: \"kubernetes.io/projected/82c7022e-ca3c-43e1-bf10-0062888b752f-kube-api-access-455mc\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.835351 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:17 crc kubenswrapper[4832]: I0125 08:55:17.835360 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c7022e-ca3c-43e1-bf10-0062888b752f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.125631 4832 generic.go:334] "Generic (PLEG): container finished" podID="82c7022e-ca3c-43e1-bf10-0062888b752f" 
containerID="1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3" exitCode=0 Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.125697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdq9j" event={"ID":"82c7022e-ca3c-43e1-bf10-0062888b752f","Type":"ContainerDied","Data":"1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3"} Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.125759 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdq9j" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.125784 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdq9j" event={"ID":"82c7022e-ca3c-43e1-bf10-0062888b752f","Type":"ContainerDied","Data":"29ca65c752e377fa6ef19a72162e7ff32f28cbe2ab2a2776d8e51eee304685c7"} Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.125820 4832 scope.go:117] "RemoveContainer" containerID="1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.152451 4832 scope.go:117] "RemoveContainer" containerID="52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.168578 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdq9j"] Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.175784 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pdq9j"] Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.193176 4832 scope.go:117] "RemoveContainer" containerID="881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.220262 4832 scope.go:117] "RemoveContainer" containerID="1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3" Jan 25 
08:55:18 crc kubenswrapper[4832]: E0125 08:55:18.220814 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3\": container with ID starting with 1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3 not found: ID does not exist" containerID="1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.220847 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3"} err="failed to get container status \"1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3\": rpc error: code = NotFound desc = could not find container \"1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3\": container with ID starting with 1a21578f9713c5c094fed5e2a4911c4522279c60936f0f55c4c51a976f3ca4c3 not found: ID does not exist" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.220871 4832 scope.go:117] "RemoveContainer" containerID="52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24" Jan 25 08:55:18 crc kubenswrapper[4832]: E0125 08:55:18.221351 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24\": container with ID starting with 52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24 not found: ID does not exist" containerID="52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.221414 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24"} err="failed to get container status 
\"52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24\": rpc error: code = NotFound desc = could not find container \"52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24\": container with ID starting with 52004729339dac3b6a65df53b0dab074413f6b2341d9e67266e84b987b7f0c24 not found: ID does not exist" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.221443 4832 scope.go:117] "RemoveContainer" containerID="881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326" Jan 25 08:55:18 crc kubenswrapper[4832]: E0125 08:55:18.221765 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326\": container with ID starting with 881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326 not found: ID does not exist" containerID="881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326" Jan 25 08:55:18 crc kubenswrapper[4832]: I0125 08:55:18.221790 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326"} err="failed to get container status \"881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326\": rpc error: code = NotFound desc = could not find container \"881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326\": container with ID starting with 881d5deeb5bb1a67045214b1f6855971d4625de23b3c58bc685eded072f7a326 not found: ID does not exist" Jan 25 08:55:19 crc kubenswrapper[4832]: I0125 08:55:19.686979 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" path="/var/lib/kubelet/pods/82c7022e-ca3c-43e1-bf10-0062888b752f/volumes" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.045336 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rgbj2"] Jan 25 08:55:21 
crc kubenswrapper[4832]: E0125 08:55:21.046490 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="extract-utilities" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.046511 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="extract-utilities" Jan 25 08:55:21 crc kubenswrapper[4832]: E0125 08:55:21.046527 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="registry-server" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.046536 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="registry-server" Jan 25 08:55:21 crc kubenswrapper[4832]: E0125 08:55:21.046578 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="extract-content" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.046588 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="extract-content" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.046846 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c7022e-ca3c-43e1-bf10-0062888b752f" containerName="registry-server" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.048659 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.054099 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgbj2"] Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.200847 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-catalog-content\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.201018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-utilities\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.201079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxfz\" (UniqueName: \"kubernetes.io/projected/04629ecb-eed3-4eb1-b085-448a64d0b2d8-kube-api-access-vxxfz\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.303209 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-utilities\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.303311 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vxxfz\" (UniqueName: \"kubernetes.io/projected/04629ecb-eed3-4eb1-b085-448a64d0b2d8-kube-api-access-vxxfz\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.303368 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-catalog-content\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.303927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-utilities\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.304003 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-catalog-content\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.336238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxfz\" (UniqueName: \"kubernetes.io/projected/04629ecb-eed3-4eb1-b085-448a64d0b2d8-kube-api-access-vxxfz\") pod \"community-operators-rgbj2\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.418088 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.843263 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.843576 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.895583 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:21 crc kubenswrapper[4832]: I0125 08:55:21.957438 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgbj2"] Jan 25 08:55:21 crc kubenswrapper[4832]: W0125 08:55:21.964801 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04629ecb_eed3_4eb1_b085_448a64d0b2d8.slice/crio-558dfce818ccc317058d5ff838dea1d782fa9718ae1cec0260b1b360ff174449 WatchSource:0}: Error finding container 558dfce818ccc317058d5ff838dea1d782fa9718ae1cec0260b1b360ff174449: Status 404 returned error can't find the container with id 558dfce818ccc317058d5ff838dea1d782fa9718ae1cec0260b1b360ff174449 Jan 25 08:55:22 crc kubenswrapper[4832]: I0125 08:55:22.149580 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 08:55:22 crc kubenswrapper[4832]: I0125 08:55:22.149909 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 08:55:22 crc kubenswrapper[4832]: I0125 08:55:22.164564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgbj2" event={"ID":"04629ecb-eed3-4eb1-b085-448a64d0b2d8","Type":"ContainerStarted","Data":"558dfce818ccc317058d5ff838dea1d782fa9718ae1cec0260b1b360ff174449"} Jan 25 08:55:22 crc kubenswrapper[4832]: I0125 08:55:22.227554 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:23 crc kubenswrapper[4832]: I0125 08:55:23.174451 4832 generic.go:334] "Generic (PLEG): container finished" podID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerID="773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8" exitCode=0 Jan 25 08:55:23 crc kubenswrapper[4832]: I0125 08:55:23.174549 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgbj2" event={"ID":"04629ecb-eed3-4eb1-b085-448a64d0b2d8","Type":"ContainerDied","Data":"773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8"} Jan 25 08:55:24 crc kubenswrapper[4832]: I0125 08:55:24.190172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgbj2" event={"ID":"04629ecb-eed3-4eb1-b085-448a64d0b2d8","Type":"ContainerStarted","Data":"51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d"} Jan 25 08:55:24 crc kubenswrapper[4832]: I0125 08:55:24.270021 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dx4vz"] Jan 25 08:55:24 crc kubenswrapper[4832]: I0125 08:55:24.270293 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dx4vz" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="registry-server" 
containerID="cri-o://6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a" gracePeriod=2 Jan 25 08:55:25 crc kubenswrapper[4832]: I0125 08:55:25.202530 4832 generic.go:334] "Generic (PLEG): container finished" podID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerID="51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d" exitCode=0 Jan 25 08:55:25 crc kubenswrapper[4832]: I0125 08:55:25.202584 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgbj2" event={"ID":"04629ecb-eed3-4eb1-b085-448a64d0b2d8","Type":"ContainerDied","Data":"51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d"} Jan 25 08:55:25 crc kubenswrapper[4832]: I0125 08:55:25.956839 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.109760 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwbv7\" (UniqueName: \"kubernetes.io/projected/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-kube-api-access-nwbv7\") pod \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.109837 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-catalog-content\") pod \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.109884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-utilities\") pod \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\" (UID: \"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2\") " Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 
08:55:26.110617 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-utilities" (OuterVolumeSpecName: "utilities") pod "3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" (UID: "3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.117709 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-kube-api-access-nwbv7" (OuterVolumeSpecName: "kube-api-access-nwbv7") pod "3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" (UID: "3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2"). InnerVolumeSpecName "kube-api-access-nwbv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.142181 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" (UID: "3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.211584 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwbv7\" (UniqueName: \"kubernetes.io/projected/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-kube-api-access-nwbv7\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.211613 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.211623 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.214213 4832 generic.go:334] "Generic (PLEG): container finished" podID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerID="6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a" exitCode=0 Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.214266 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dx4vz" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.214310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dx4vz" event={"ID":"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2","Type":"ContainerDied","Data":"6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a"} Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.214376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dx4vz" event={"ID":"3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2","Type":"ContainerDied","Data":"d9cc48ed5097ae223085b63cb94c38e5ddb9e06b02f94c9587adcb478aa7ed0c"} Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.214421 4832 scope.go:117] "RemoveContainer" containerID="6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.216940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgbj2" event={"ID":"04629ecb-eed3-4eb1-b085-448a64d0b2d8","Type":"ContainerStarted","Data":"f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b"} Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.240913 4832 scope.go:117] "RemoveContainer" containerID="c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.246173 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rgbj2" podStartSLOduration=2.7617912430000002 podStartE2EDuration="5.246147125s" podCreationTimestamp="2026-01-25 08:55:21 +0000 UTC" firstStartedPulling="2026-01-25 08:55:23.178252255 +0000 UTC m=+3505.852075788" lastFinishedPulling="2026-01-25 08:55:25.662608117 +0000 UTC m=+3508.336431670" observedRunningTime="2026-01-25 08:55:26.240424496 +0000 UTC m=+3508.914248079" watchObservedRunningTime="2026-01-25 
08:55:26.246147125 +0000 UTC m=+3508.919970658" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.277885 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dx4vz"] Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.283121 4832 scope.go:117] "RemoveContainer" containerID="fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.284832 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dx4vz"] Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.319358 4832 scope.go:117] "RemoveContainer" containerID="6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a" Jan 25 08:55:26 crc kubenswrapper[4832]: E0125 08:55:26.319778 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a\": container with ID starting with 6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a not found: ID does not exist" containerID="6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.319812 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a"} err="failed to get container status \"6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a\": rpc error: code = NotFound desc = could not find container \"6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a\": container with ID starting with 6de0686168dbd20641061f614d2f373d4546e585726dcc277ca9947a45de7e3a not found: ID does not exist" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.319833 4832 scope.go:117] "RemoveContainer" containerID="c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e" Jan 25 
08:55:26 crc kubenswrapper[4832]: E0125 08:55:26.320270 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e\": container with ID starting with c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e not found: ID does not exist" containerID="c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.320293 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e"} err="failed to get container status \"c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e\": rpc error: code = NotFound desc = could not find container \"c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e\": container with ID starting with c5280f56b6ec5593e463210c851cc907001c66edd47be261925d4763b8e3ba0e not found: ID does not exist" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.320314 4832 scope.go:117] "RemoveContainer" containerID="fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321" Jan 25 08:55:26 crc kubenswrapper[4832]: E0125 08:55:26.320546 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321\": container with ID starting with fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321 not found: ID does not exist" containerID="fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321" Jan 25 08:55:26 crc kubenswrapper[4832]: I0125 08:55:26.320570 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321"} err="failed to get container status 
\"fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321\": rpc error: code = NotFound desc = could not find container \"fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321\": container with ID starting with fae02f1777f549bf56d5a86b7042ec830c147eaee1642d4de358a538a351d321 not found: ID does not exist" Jan 25 08:55:27 crc kubenswrapper[4832]: I0125 08:55:27.680762 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" path="/var/lib/kubelet/pods/3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2/volumes" Jan 25 08:55:31 crc kubenswrapper[4832]: I0125 08:55:31.418998 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:31 crc kubenswrapper[4832]: I0125 08:55:31.419569 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:31 crc kubenswrapper[4832]: I0125 08:55:31.463634 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:32 crc kubenswrapper[4832]: I0125 08:55:32.313958 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:32 crc kubenswrapper[4832]: I0125 08:55:32.357731 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgbj2"] Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.285495 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rgbj2" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="registry-server" containerID="cri-o://f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b" gracePeriod=2 Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.730787 4832 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.779979 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxxfz\" (UniqueName: \"kubernetes.io/projected/04629ecb-eed3-4eb1-b085-448a64d0b2d8-kube-api-access-vxxfz\") pod \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.780109 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-utilities\") pod \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.780180 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-catalog-content\") pod \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\" (UID: \"04629ecb-eed3-4eb1-b085-448a64d0b2d8\") " Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.781428 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-utilities" (OuterVolumeSpecName: "utilities") pod "04629ecb-eed3-4eb1-b085-448a64d0b2d8" (UID: "04629ecb-eed3-4eb1-b085-448a64d0b2d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.788263 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04629ecb-eed3-4eb1-b085-448a64d0b2d8-kube-api-access-vxxfz" (OuterVolumeSpecName: "kube-api-access-vxxfz") pod "04629ecb-eed3-4eb1-b085-448a64d0b2d8" (UID: "04629ecb-eed3-4eb1-b085-448a64d0b2d8"). InnerVolumeSpecName "kube-api-access-vxxfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.854576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04629ecb-eed3-4eb1-b085-448a64d0b2d8" (UID: "04629ecb-eed3-4eb1-b085-448a64d0b2d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.883026 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.883071 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04629ecb-eed3-4eb1-b085-448a64d0b2d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:34 crc kubenswrapper[4832]: I0125 08:55:34.883084 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxxfz\" (UniqueName: \"kubernetes.io/projected/04629ecb-eed3-4eb1-b085-448a64d0b2d8-kube-api-access-vxxfz\") on node \"crc\" DevicePath \"\"" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.310629 4832 generic.go:334] "Generic (PLEG): container finished" podID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerID="f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b" exitCode=0 Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.310681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgbj2" event={"ID":"04629ecb-eed3-4eb1-b085-448a64d0b2d8","Type":"ContainerDied","Data":"f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b"} Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.310722 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-rgbj2" event={"ID":"04629ecb-eed3-4eb1-b085-448a64d0b2d8","Type":"ContainerDied","Data":"558dfce818ccc317058d5ff838dea1d782fa9718ae1cec0260b1b360ff174449"} Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.310744 4832 scope.go:117] "RemoveContainer" containerID="f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.310743 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgbj2" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.354160 4832 scope.go:117] "RemoveContainer" containerID="51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.354467 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgbj2"] Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.364492 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rgbj2"] Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.383239 4832 scope.go:117] "RemoveContainer" containerID="773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.425959 4832 scope.go:117] "RemoveContainer" containerID="f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b" Jan 25 08:55:35 crc kubenswrapper[4832]: E0125 08:55:35.426920 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b\": container with ID starting with f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b not found: ID does not exist" containerID="f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 
08:55:35.426966 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b"} err="failed to get container status \"f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b\": rpc error: code = NotFound desc = could not find container \"f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b\": container with ID starting with f463ae35a7366cea50327299d13aec343675130a0c09c436a899d7e6926c3a5b not found: ID does not exist" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.426999 4832 scope.go:117] "RemoveContainer" containerID="51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d" Jan 25 08:55:35 crc kubenswrapper[4832]: E0125 08:55:35.427607 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d\": container with ID starting with 51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d not found: ID does not exist" containerID="51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.427643 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d"} err="failed to get container status \"51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d\": rpc error: code = NotFound desc = could not find container \"51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d\": container with ID starting with 51361fec0da9db3fd5ee218fb78032db4bd9e81e755c7a8371ab2a4be618fe1d not found: ID does not exist" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.427667 4832 scope.go:117] "RemoveContainer" containerID="773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8" Jan 25 08:55:35 crc 
kubenswrapper[4832]: E0125 08:55:35.428159 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8\": container with ID starting with 773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8 not found: ID does not exist" containerID="773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.428190 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8"} err="failed to get container status \"773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8\": rpc error: code = NotFound desc = could not find container \"773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8\": container with ID starting with 773c9441535f0f73e4a33be522e93c53bb4382c678fb4be2d37a3c1a16bdb0d8 not found: ID does not exist" Jan 25 08:55:35 crc kubenswrapper[4832]: I0125 08:55:35.681255 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" path="/var/lib/kubelet/pods/04629ecb-eed3-4eb1-b085-448a64d0b2d8/volumes" Jan 25 08:55:45 crc kubenswrapper[4832]: I0125 08:55:45.422361 4832 generic.go:334] "Generic (PLEG): container finished" podID="f075c376-fe6e-44de-bb3d-113de5b9fb3f" containerID="60691ffa1d211192cd9ccf878b2abc715c52cee85666c1a21dae351f7a192400" exitCode=0 Jan 25 08:55:45 crc kubenswrapper[4832]: I0125 08:55:45.422492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f075c376-fe6e-44de-bb3d-113de5b9fb3f","Type":"ContainerDied","Data":"60691ffa1d211192cd9ccf878b2abc715c52cee85666c1a21dae351f7a192400"} Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.790955 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.920737 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ca-certs\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.920819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.920935 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-temporary\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.920957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config-secret\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.921011 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ssh-key\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.922097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-config-data" (OuterVolumeSpecName: "config-data") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.922176 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.922441 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-config-data\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.922650 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rft5k\" (UniqueName: \"kubernetes.io/projected/f075c376-fe6e-44de-bb3d-113de5b9fb3f-kube-api-access-rft5k\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.922739 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-workdir\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.922848 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\" (UID: \"f075c376-fe6e-44de-bb3d-113de5b9fb3f\") "
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.923907 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.923965 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-config-data\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.930326 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.932157 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.934027 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f075c376-fe6e-44de-bb3d-113de5b9fb3f-kube-api-access-rft5k" (OuterVolumeSpecName: "kube-api-access-rft5k") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "kube-api-access-rft5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.953201 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.960787 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.965140 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 08:55:46 crc kubenswrapper[4832]: I0125 08:55:46.977584 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f075c376-fe6e-44de-bb3d-113de5b9fb3f" (UID: "f075c376-fe6e-44de-bb3d-113de5b9fb3f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.026052 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f075c376-fe6e-44de-bb3d-113de5b9fb3f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.026654 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.027373 4832 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.027485 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.027519 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.027545 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f075c376-fe6e-44de-bb3d-113de5b9fb3f-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.027571 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rft5k\" (UniqueName: \"kubernetes.io/projected/f075c376-fe6e-44de-bb3d-113de5b9fb3f-kube-api-access-rft5k\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.048493 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.129939 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.443815 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f075c376-fe6e-44de-bb3d-113de5b9fb3f","Type":"ContainerDied","Data":"a079734bbb82710295e961674635d06d5d22609699f27b92b5e630c25b526814"}
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.443877 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a079734bbb82710295e961674635d06d5d22609699f27b92b5e630c25b526814"
Jan 25 08:55:47 crc kubenswrapper[4832]: I0125 08:55:47.444146 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 25 08:55:52 crc kubenswrapper[4832]: I0125 08:55:52.149950 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 08:55:52 crc kubenswrapper[4832]: I0125 08:55:52.150607 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.283457 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 25 08:55:55 crc kubenswrapper[4832]: E0125 08:55:55.284587 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="registry-server"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.284617 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="registry-server"
Jan 25 08:55:55 crc kubenswrapper[4832]: E0125 08:55:55.284646 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="extract-content"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.284658 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="extract-content"
Jan 25 08:55:55 crc kubenswrapper[4832]: E0125 08:55:55.284693 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="extract-utilities"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.284709 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="extract-utilities"
Jan 25 08:55:55 crc kubenswrapper[4832]: E0125 08:55:55.284731 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f075c376-fe6e-44de-bb3d-113de5b9fb3f" containerName="tempest-tests-tempest-tests-runner"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.284743 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f075c376-fe6e-44de-bb3d-113de5b9fb3f" containerName="tempest-tests-tempest-tests-runner"
Jan 25 08:55:55 crc kubenswrapper[4832]: E0125 08:55:55.284769 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="extract-utilities"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.284780 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="extract-utilities"
Jan 25 08:55:55 crc kubenswrapper[4832]: E0125 08:55:55.284811 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="extract-content"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.284822 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="extract-content"
Jan 25 08:55:55 crc kubenswrapper[4832]: E0125 08:55:55.284844 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="registry-server"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.284855 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="registry-server"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.285188 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="04629ecb-eed3-4eb1-b085-448a64d0b2d8" containerName="registry-server"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.285218 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f12e82f-acb8-4f4f-ba1b-2e47764e7aa2" containerName="registry-server"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.285241 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f075c376-fe6e-44de-bb3d-113de5b9fb3f" containerName="tempest-tests-tempest-tests-runner"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.286379 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.291937 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wnc6t"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.303447 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.398724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5d3f03a6-2f57-4a65-9e70-0828473a9469\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.398780 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7v58\" (UniqueName: \"kubernetes.io/projected/5d3f03a6-2f57-4a65-9e70-0828473a9469-kube-api-access-z7v58\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5d3f03a6-2f57-4a65-9e70-0828473a9469\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.500675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5d3f03a6-2f57-4a65-9e70-0828473a9469\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.501135 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7v58\" (UniqueName: \"kubernetes.io/projected/5d3f03a6-2f57-4a65-9e70-0828473a9469-kube-api-access-z7v58\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5d3f03a6-2f57-4a65-9e70-0828473a9469\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.501630 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5d3f03a6-2f57-4a65-9e70-0828473a9469\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.524833 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7v58\" (UniqueName: \"kubernetes.io/projected/5d3f03a6-2f57-4a65-9e70-0828473a9469-kube-api-access-z7v58\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5d3f03a6-2f57-4a65-9e70-0828473a9469\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.528431 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5d3f03a6-2f57-4a65-9e70-0828473a9469\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:55 crc kubenswrapper[4832]: I0125 08:55:55.625245 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 25 08:55:56 crc kubenswrapper[4832]: I0125 08:55:56.076857 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 25 08:55:56 crc kubenswrapper[4832]: I0125 08:55:56.534790 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5d3f03a6-2f57-4a65-9e70-0828473a9469","Type":"ContainerStarted","Data":"cd375f42c2ac6d6f2033e0c9c7b6b04170b1ebee6e57ee5b130e47bf34d1d301"}
Jan 25 08:55:57 crc kubenswrapper[4832]: I0125 08:55:57.548980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5d3f03a6-2f57-4a65-9e70-0828473a9469","Type":"ContainerStarted","Data":"058614fc0583e71904cfa6a831ae71d86a6a255b4a6125b7c87785c2038d0c8e"}
Jan 25 08:55:57 crc kubenswrapper[4832]: I0125 08:55:57.571077 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.533562785 podStartE2EDuration="2.571052188s" podCreationTimestamp="2026-01-25 08:55:55 +0000 UTC" firstStartedPulling="2026-01-25 08:55:56.086054981 +0000 UTC m=+3538.759878514" lastFinishedPulling="2026-01-25 08:55:57.123544344 +0000 UTC m=+3539.797367917" observedRunningTime="2026-01-25 08:55:57.566614079 +0000 UTC m=+3540.240437622" watchObservedRunningTime="2026-01-25 08:55:57.571052188 +0000 UTC m=+3540.244875721"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.519054 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t2k6c/must-gather-wf66j"]
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.520993 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.523556 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t2k6c"/"kube-root-ca.crt"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.536381 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t2k6c"/"openshift-service-ca.crt"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.567750 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t2k6c/must-gather-wf66j"]
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.586739 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf692\" (UniqueName: \"kubernetes.io/projected/c2c42541-00a2-4d5a-a875-3b52d73b08eb-kube-api-access-mf692\") pod \"must-gather-wf66j\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.587175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c2c42541-00a2-4d5a-a875-3b52d73b08eb-must-gather-output\") pod \"must-gather-wf66j\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.691640 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf692\" (UniqueName: \"kubernetes.io/projected/c2c42541-00a2-4d5a-a875-3b52d73b08eb-kube-api-access-mf692\") pod \"must-gather-wf66j\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.691741 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c2c42541-00a2-4d5a-a875-3b52d73b08eb-must-gather-output\") pod \"must-gather-wf66j\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.702440 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c2c42541-00a2-4d5a-a875-3b52d73b08eb-must-gather-output\") pod \"must-gather-wf66j\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.741458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf692\" (UniqueName: \"kubernetes.io/projected/c2c42541-00a2-4d5a-a875-3b52d73b08eb-kube-api-access-mf692\") pod \"must-gather-wf66j\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:19 crc kubenswrapper[4832]: I0125 08:56:19.856821 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/must-gather-wf66j"
Jan 25 08:56:20 crc kubenswrapper[4832]: I0125 08:56:20.342600 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t2k6c/must-gather-wf66j"]
Jan 25 08:56:20 crc kubenswrapper[4832]: I0125 08:56:20.770465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/must-gather-wf66j" event={"ID":"c2c42541-00a2-4d5a-a875-3b52d73b08eb","Type":"ContainerStarted","Data":"85baa1736854db4a1a3472ef9cc835294dc7ed3c7568bc7988e25a3ba52a9bf7"}
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.150783 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.151234 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.151294 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz"
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.152296 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.152356 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" gracePeriod=600
Jan 25 08:56:22 crc kubenswrapper[4832]: E0125 08:56:22.282661 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0"
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.813136 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" exitCode=0
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.813186 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"}
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.813500 4832 scope.go:117] "RemoveContainer" containerID="7ace08f928564b03ea6b92806bc43a72271873c73f1423c0385090593b7be414"
Jan 25 08:56:22 crc kubenswrapper[4832]: I0125 08:56:22.814293 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"
Jan 25 08:56:22 crc kubenswrapper[4832]: E0125 08:56:22.814652 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0"
Jan 25 08:56:27 crc kubenswrapper[4832]: I0125 08:56:27.907509 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/must-gather-wf66j" event={"ID":"c2c42541-00a2-4d5a-a875-3b52d73b08eb","Type":"ContainerStarted","Data":"a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834"}
Jan 25 08:56:27 crc kubenswrapper[4832]: I0125 08:56:27.908004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/must-gather-wf66j" event={"ID":"c2c42541-00a2-4d5a-a875-3b52d73b08eb","Type":"ContainerStarted","Data":"a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17"}
Jan 25 08:56:27 crc kubenswrapper[4832]: I0125 08:56:27.930571 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t2k6c/must-gather-wf66j" podStartSLOduration=2.197736068 podStartE2EDuration="8.930545085s" podCreationTimestamp="2026-01-25 08:56:19 +0000 UTC" firstStartedPulling="2026-01-25 08:56:20.358628441 +0000 UTC m=+3563.032451974" lastFinishedPulling="2026-01-25 08:56:27.091437458 +0000 UTC m=+3569.765260991" observedRunningTime="2026-01-25 08:56:27.930128732 +0000 UTC m=+3570.603952275" watchObservedRunningTime="2026-01-25 08:56:27.930545085 +0000 UTC m=+3570.604368628"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.166098 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-5j86b"]
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.168233 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.171269 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t2k6c"/"default-dockercfg-85m8k"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.248310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-host\") pod \"crc-debug-5j86b\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") " pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.248406 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75sq\" (UniqueName: \"kubernetes.io/projected/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-kube-api-access-m75sq\") pod \"crc-debug-5j86b\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") " pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.350278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-host\") pod \"crc-debug-5j86b\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") " pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.350380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m75sq\" (UniqueName: \"kubernetes.io/projected/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-kube-api-access-m75sq\") pod \"crc-debug-5j86b\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") " pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.350488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-host\") pod \"crc-debug-5j86b\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") " pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.379504 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m75sq\" (UniqueName: \"kubernetes.io/projected/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-kube-api-access-m75sq\") pod \"crc-debug-5j86b\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") " pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.486257 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:56:31 crc kubenswrapper[4832]: W0125 08:56:31.530829 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6713ef3_1924_4cc8_bbc4_c4d0151b00b1.slice/crio-a3aa3470a2f2ce63de197dd0889ca9bfb5865f434153b4e16d9a53bf9fb99fa4 WatchSource:0}: Error finding container a3aa3470a2f2ce63de197dd0889ca9bfb5865f434153b4e16d9a53bf9fb99fa4: Status 404 returned error can't find the container with id a3aa3470a2f2ce63de197dd0889ca9bfb5865f434153b4e16d9a53bf9fb99fa4
Jan 25 08:56:31 crc kubenswrapper[4832]: I0125 08:56:31.960751 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/crc-debug-5j86b" event={"ID":"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1","Type":"ContainerStarted","Data":"a3aa3470a2f2ce63de197dd0889ca9bfb5865f434153b4e16d9a53bf9fb99fa4"}
Jan 25 08:56:34 crc kubenswrapper[4832]: I0125 08:56:34.669908 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"
Jan 25 08:56:34 crc kubenswrapper[4832]: E0125 08:56:34.670236 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0"
Jan 25 08:56:43 crc kubenswrapper[4832]: I0125 08:56:43.078943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/crc-debug-5j86b" event={"ID":"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1","Type":"ContainerStarted","Data":"a5f6cb748904837856822bcc5556449548eafb31f998f6b735fe970fe439417f"}
Jan 25 08:56:43 crc kubenswrapper[4832]: I0125 08:56:43.099716 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t2k6c/crc-debug-5j86b" podStartSLOduration=1.036134326 podStartE2EDuration="12.099684856s" podCreationTimestamp="2026-01-25 08:56:31 +0000 UTC" firstStartedPulling="2026-01-25 08:56:31.533178636 +0000 UTC m=+3574.207002169" lastFinishedPulling="2026-01-25 08:56:42.596729166 +0000 UTC m=+3585.270552699" observedRunningTime="2026-01-25 08:56:43.092842623 +0000 UTC m=+3585.766666156" watchObservedRunningTime="2026-01-25 08:56:43.099684856 +0000 UTC m=+3585.773508389"
Jan 25 08:56:46 crc kubenswrapper[4832]: I0125 08:56:46.670229 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"
Jan 25 08:56:46 crc kubenswrapper[4832]: E0125 08:56:46.671354 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0"
Jan 25 08:57:00 crc kubenswrapper[4832]: I0125 08:57:00.670245 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"
Jan 25 08:57:00 crc kubenswrapper[4832]: E0125 08:57:00.670913 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0"
Jan 25 08:57:15 crc kubenswrapper[4832]: I0125 08:57:15.670013 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311"
Jan 25 08:57:15 crc kubenswrapper[4832]: E0125 08:57:15.670752 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0"
Jan 25 08:57:21 crc kubenswrapper[4832]: I0125 08:57:21.436898 4832 generic.go:334] "Generic (PLEG): container finished" podID="c6713ef3-1924-4cc8-bbc4-c4d0151b00b1" containerID="a5f6cb748904837856822bcc5556449548eafb31f998f6b735fe970fe439417f" exitCode=0
Jan 25 08:57:21 crc kubenswrapper[4832]: I0125 08:57:21.436996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/crc-debug-5j86b" event={"ID":"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1","Type":"ContainerDied","Data":"a5f6cb748904837856822bcc5556449548eafb31f998f6b735fe970fe439417f"}
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.550222 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-5j86b"
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.588956 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-5j86b"]
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.597633 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-5j86b"]
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.656245 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-host\") pod \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") "
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.656522 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m75sq\" (UniqueName: \"kubernetes.io/projected/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-kube-api-access-m75sq\") pod \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\" (UID: \"c6713ef3-1924-4cc8-bbc4-c4d0151b00b1\") "
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.656697 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-host" (OuterVolumeSpecName: "host") pod "c6713ef3-1924-4cc8-bbc4-c4d0151b00b1" (UID: "c6713ef3-1924-4cc8-bbc4-c4d0151b00b1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.657072 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-host\") on node \"crc\" DevicePath \"\""
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.663228 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-kube-api-access-m75sq" (OuterVolumeSpecName: "kube-api-access-m75sq") pod "c6713ef3-1924-4cc8-bbc4-c4d0151b00b1" (UID: "c6713ef3-1924-4cc8-bbc4-c4d0151b00b1"). InnerVolumeSpecName "kube-api-access-m75sq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 08:57:22 crc kubenswrapper[4832]: I0125 08:57:22.759305 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m75sq\" (UniqueName: \"kubernetes.io/projected/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1-kube-api-access-m75sq\") on node \"crc\" DevicePath \"\""
Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.458346 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3aa3470a2f2ce63de197dd0889ca9bfb5865f434153b4e16d9a53bf9fb99fa4"
Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.458753 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-5j86b" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.679348 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6713ef3-1924-4cc8-bbc4-c4d0151b00b1" path="/var/lib/kubelet/pods/c6713ef3-1924-4cc8-bbc4-c4d0151b00b1/volumes" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.806701 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-qgmk4"] Jan 25 08:57:23 crc kubenswrapper[4832]: E0125 08:57:23.807261 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6713ef3-1924-4cc8-bbc4-c4d0151b00b1" containerName="container-00" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.807284 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6713ef3-1924-4cc8-bbc4-c4d0151b00b1" containerName="container-00" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.807551 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6713ef3-1924-4cc8-bbc4-c4d0151b00b1" containerName="container-00" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.808369 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.810759 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t2k6c"/"default-dockercfg-85m8k" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.988539 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c917e9f-3207-4f9f-854f-1e5300a506a8-host\") pod \"crc-debug-qgmk4\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:23 crc kubenswrapper[4832]: I0125 08:57:23.989087 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48c4b\" (UniqueName: \"kubernetes.io/projected/1c917e9f-3207-4f9f-854f-1e5300a506a8-kube-api-access-48c4b\") pod \"crc-debug-qgmk4\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.091231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c917e9f-3207-4f9f-854f-1e5300a506a8-host\") pod \"crc-debug-qgmk4\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.091356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48c4b\" (UniqueName: \"kubernetes.io/projected/1c917e9f-3207-4f9f-854f-1e5300a506a8-kube-api-access-48c4b\") pod \"crc-debug-qgmk4\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.091448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/1c917e9f-3207-4f9f-854f-1e5300a506a8-host\") pod \"crc-debug-qgmk4\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.111730 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48c4b\" (UniqueName: \"kubernetes.io/projected/1c917e9f-3207-4f9f-854f-1e5300a506a8-kube-api-access-48c4b\") pod \"crc-debug-qgmk4\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.125715 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.470839 4832 generic.go:334] "Generic (PLEG): container finished" podID="1c917e9f-3207-4f9f-854f-1e5300a506a8" containerID="d688e911fcb74987d98802204a05b49f0f5cd0346cf1d52ffcf1a40ef48c7393" exitCode=0 Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.471180 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" event={"ID":"1c917e9f-3207-4f9f-854f-1e5300a506a8","Type":"ContainerDied","Data":"d688e911fcb74987d98802204a05b49f0f5cd0346cf1d52ffcf1a40ef48c7393"} Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.471215 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" event={"ID":"1c917e9f-3207-4f9f-854f-1e5300a506a8","Type":"ContainerStarted","Data":"0bdae367f39ce63ee5e4fca8dcada384e2b11bfc8ba969894b149691c3befe5e"} Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.853662 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-qgmk4"] Jan 25 08:57:24 crc kubenswrapper[4832]: I0125 08:57:24.864955 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-t2k6c/crc-debug-qgmk4"] Jan 25 08:57:25 crc kubenswrapper[4832]: I0125 08:57:25.595527 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:25 crc kubenswrapper[4832]: I0125 08:57:25.728901 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c917e9f-3207-4f9f-854f-1e5300a506a8-host\") pod \"1c917e9f-3207-4f9f-854f-1e5300a506a8\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " Jan 25 08:57:25 crc kubenswrapper[4832]: I0125 08:57:25.729040 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c917e9f-3207-4f9f-854f-1e5300a506a8-host" (OuterVolumeSpecName: "host") pod "1c917e9f-3207-4f9f-854f-1e5300a506a8" (UID: "1c917e9f-3207-4f9f-854f-1e5300a506a8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:57:25 crc kubenswrapper[4832]: I0125 08:57:25.729186 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48c4b\" (UniqueName: \"kubernetes.io/projected/1c917e9f-3207-4f9f-854f-1e5300a506a8-kube-api-access-48c4b\") pod \"1c917e9f-3207-4f9f-854f-1e5300a506a8\" (UID: \"1c917e9f-3207-4f9f-854f-1e5300a506a8\") " Jan 25 08:57:25 crc kubenswrapper[4832]: I0125 08:57:25.729728 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c917e9f-3207-4f9f-854f-1e5300a506a8-host\") on node \"crc\" DevicePath \"\"" Jan 25 08:57:25 crc kubenswrapper[4832]: I0125 08:57:25.738622 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c917e9f-3207-4f9f-854f-1e5300a506a8-kube-api-access-48c4b" (OuterVolumeSpecName: "kube-api-access-48c4b") pod "1c917e9f-3207-4f9f-854f-1e5300a506a8" (UID: "1c917e9f-3207-4f9f-854f-1e5300a506a8"). 
InnerVolumeSpecName "kube-api-access-48c4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:57:25 crc kubenswrapper[4832]: I0125 08:57:25.831825 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48c4b\" (UniqueName: \"kubernetes.io/projected/1c917e9f-3207-4f9f-854f-1e5300a506a8-kube-api-access-48c4b\") on node \"crc\" DevicePath \"\"" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.033211 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-4b49g"] Jan 25 08:57:26 crc kubenswrapper[4832]: E0125 08:57:26.033738 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c917e9f-3207-4f9f-854f-1e5300a506a8" containerName="container-00" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.033753 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c917e9f-3207-4f9f-854f-1e5300a506a8" containerName="container-00" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.033981 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c917e9f-3207-4f9f-854f-1e5300a506a8" containerName="container-00" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.034831 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.138060 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6fcba51-8bea-4761-93a4-eb10626cef22-host\") pod \"crc-debug-4b49g\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.138160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcccj\" (UniqueName: \"kubernetes.io/projected/d6fcba51-8bea-4761-93a4-eb10626cef22-kube-api-access-qcccj\") pod \"crc-debug-4b49g\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.239798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6fcba51-8bea-4761-93a4-eb10626cef22-host\") pod \"crc-debug-4b49g\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.239902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcccj\" (UniqueName: \"kubernetes.io/projected/d6fcba51-8bea-4761-93a4-eb10626cef22-kube-api-access-qcccj\") pod \"crc-debug-4b49g\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.239974 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6fcba51-8bea-4761-93a4-eb10626cef22-host\") pod \"crc-debug-4b49g\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc 
kubenswrapper[4832]: I0125 08:57:26.256780 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcccj\" (UniqueName: \"kubernetes.io/projected/d6fcba51-8bea-4761-93a4-eb10626cef22-kube-api-access-qcccj\") pod \"crc-debug-4b49g\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.353679 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:26 crc kubenswrapper[4832]: W0125 08:57:26.391928 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6fcba51_8bea_4761_93a4_eb10626cef22.slice/crio-703a168b79fb7714106d6ecaa24fe1d8c690f689aca5426e964e419b53a6cc83 WatchSource:0}: Error finding container 703a168b79fb7714106d6ecaa24fe1d8c690f689aca5426e964e419b53a6cc83: Status 404 returned error can't find the container with id 703a168b79fb7714106d6ecaa24fe1d8c690f689aca5426e964e419b53a6cc83 Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.496634 4832 scope.go:117] "RemoveContainer" containerID="d688e911fcb74987d98802204a05b49f0f5cd0346cf1d52ffcf1a40ef48c7393" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.496669 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-qgmk4" Jan 25 08:57:26 crc kubenswrapper[4832]: I0125 08:57:26.498291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/crc-debug-4b49g" event={"ID":"d6fcba51-8bea-4761-93a4-eb10626cef22","Type":"ContainerStarted","Data":"703a168b79fb7714106d6ecaa24fe1d8c690f689aca5426e964e419b53a6cc83"} Jan 25 08:57:27 crc kubenswrapper[4832]: I0125 08:57:27.507972 4832 generic.go:334] "Generic (PLEG): container finished" podID="d6fcba51-8bea-4761-93a4-eb10626cef22" containerID="629ddcc57c7fd440555d04d8a7742054cc08e0cf1bdfb4a354403744991c9ba7" exitCode=0 Jan 25 08:57:27 crc kubenswrapper[4832]: I0125 08:57:27.508043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/crc-debug-4b49g" event={"ID":"d6fcba51-8bea-4761-93a4-eb10626cef22","Type":"ContainerDied","Data":"629ddcc57c7fd440555d04d8a7742054cc08e0cf1bdfb4a354403744991c9ba7"} Jan 25 08:57:27 crc kubenswrapper[4832]: I0125 08:57:27.547869 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-4b49g"] Jan 25 08:57:27 crc kubenswrapper[4832]: I0125 08:57:27.555811 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t2k6c/crc-debug-4b49g"] Jan 25 08:57:27 crc kubenswrapper[4832]: I0125 08:57:27.686956 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c917e9f-3207-4f9f-854f-1e5300a506a8" path="/var/lib/kubelet/pods/1c917e9f-3207-4f9f-854f-1e5300a506a8/volumes" Jan 25 08:57:28 crc kubenswrapper[4832]: I0125 08:57:28.625085 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:28 crc kubenswrapper[4832]: I0125 08:57:28.795868 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcccj\" (UniqueName: \"kubernetes.io/projected/d6fcba51-8bea-4761-93a4-eb10626cef22-kube-api-access-qcccj\") pod \"d6fcba51-8bea-4761-93a4-eb10626cef22\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " Jan 25 08:57:28 crc kubenswrapper[4832]: I0125 08:57:28.795945 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6fcba51-8bea-4761-93a4-eb10626cef22-host\") pod \"d6fcba51-8bea-4761-93a4-eb10626cef22\" (UID: \"d6fcba51-8bea-4761-93a4-eb10626cef22\") " Jan 25 08:57:28 crc kubenswrapper[4832]: I0125 08:57:28.796101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6fcba51-8bea-4761-93a4-eb10626cef22-host" (OuterVolumeSpecName: "host") pod "d6fcba51-8bea-4761-93a4-eb10626cef22" (UID: "d6fcba51-8bea-4761-93a4-eb10626cef22"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 08:57:28 crc kubenswrapper[4832]: I0125 08:57:28.796491 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6fcba51-8bea-4761-93a4-eb10626cef22-host\") on node \"crc\" DevicePath \"\"" Jan 25 08:57:28 crc kubenswrapper[4832]: I0125 08:57:28.804617 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6fcba51-8bea-4761-93a4-eb10626cef22-kube-api-access-qcccj" (OuterVolumeSpecName: "kube-api-access-qcccj") pod "d6fcba51-8bea-4761-93a4-eb10626cef22" (UID: "d6fcba51-8bea-4761-93a4-eb10626cef22"). InnerVolumeSpecName "kube-api-access-qcccj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 08:57:28 crc kubenswrapper[4832]: I0125 08:57:28.897981 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcccj\" (UniqueName: \"kubernetes.io/projected/d6fcba51-8bea-4761-93a4-eb10626cef22-kube-api-access-qcccj\") on node \"crc\" DevicePath \"\"" Jan 25 08:57:29 crc kubenswrapper[4832]: I0125 08:57:29.529313 4832 scope.go:117] "RemoveContainer" containerID="629ddcc57c7fd440555d04d8a7742054cc08e0cf1bdfb4a354403744991c9ba7" Jan 25 08:57:29 crc kubenswrapper[4832]: I0125 08:57:29.529340 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t2k6c/crc-debug-4b49g" Jan 25 08:57:29 crc kubenswrapper[4832]: I0125 08:57:29.684487 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6fcba51-8bea-4761-93a4-eb10626cef22" path="/var/lib/kubelet/pods/d6fcba51-8bea-4761-93a4-eb10626cef22/volumes" Jan 25 08:57:30 crc kubenswrapper[4832]: I0125 08:57:30.670117 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:57:30 crc kubenswrapper[4832]: E0125 08:57:30.670821 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:57:43 crc kubenswrapper[4832]: I0125 08:57:43.669909 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:57:43 crc kubenswrapper[4832]: E0125 08:57:43.671149 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:57:43 crc kubenswrapper[4832]: I0125 08:57:43.885361 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9f466dd54-88fdd_ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5/barbican-api/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.081917 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9f466dd54-88fdd_ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5/barbican-api-log/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.098770 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b4947bb84-pmdh6_4899f618-1f51-4d34-9970-7c096359b47e/barbican-keystone-listener/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.123470 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b4947bb84-pmdh6_4899f618-1f51-4d34-9970-7c096359b47e/barbican-keystone-listener-log/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.372740 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-855cdf875c-rxk79_26baac3d-6d07-4f33-956e-4048e3318099/barbican-worker/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.374078 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-855cdf875c-rxk79_26baac3d-6d07-4f33-956e-4048e3318099/barbican-worker-log/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.570554 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf_146a1b8e-1733-40ca-81a5-d73122618f4d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.666437 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/ceilometer-central-agent/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.729558 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/ceilometer-notification-agent/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.825414 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/proxy-httpd/0.log" Jan 25 08:57:44 crc kubenswrapper[4832]: I0125 08:57:44.853861 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/sg-core/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.013504 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_db0ff763-c24c-45a4-b3c5-7dc32962816f/cinder-api/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.042325 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_db0ff763-c24c-45a4-b3c5-7dc32962816f/cinder-api-log/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.158320 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c3f65dba-194a-46be-b020-24ee852b965a/cinder-scheduler/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.240233 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c3f65dba-194a-46be-b020-24ee852b965a/probe/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.332310 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fr296_ef813e8a-d19f-4638-bd75-5cba3643b1d0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.473736 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7_10ca3609-7786-4065-9125-f1460e9718f2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.577439 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5r9mm_8b7acd70-a72a-477f-af0d-455512cb4e81/init/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.814654 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5r9mm_8b7acd70-a72a-477f-af0d-455512cb4e81/init/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.835545 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5r9mm_8b7acd70-a72a-477f-af0d-455512cb4e81/dnsmasq-dns/0.log" Jan 25 08:57:45 crc kubenswrapper[4832]: I0125 08:57:45.870832 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5wttx_c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:46 crc kubenswrapper[4832]: I0125 08:57:46.251637 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ba1988f-0ee4-4e4d-9b32-eff3fe30c959/glance-log/0.log" Jan 25 08:57:46 crc kubenswrapper[4832]: I0125 08:57:46.258764 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ba1988f-0ee4-4e4d-9b32-eff3fe30c959/glance-httpd/0.log" Jan 25 08:57:46 crc kubenswrapper[4832]: I0125 08:57:46.446053 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_ca10626f-eeda-438c-8d2b-5b7c734db90d/glance-httpd/0.log" Jan 25 08:57:46 crc kubenswrapper[4832]: I0125 08:57:46.566152 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ca10626f-eeda-438c-8d2b-5b7c734db90d/glance-log/0.log" Jan 25 08:57:46 crc kubenswrapper[4832]: I0125 08:57:46.620167 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f649cfc6-vzpx7_26fd6803-3263-4989-a86e-908f6a504d14/horizon/1.log" Jan 25 08:57:46 crc kubenswrapper[4832]: I0125 08:57:46.804831 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f649cfc6-vzpx7_26fd6803-3263-4989-a86e-908f6a504d14/horizon/0.log" Jan 25 08:57:46 crc kubenswrapper[4832]: I0125 08:57:46.924174 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj_ca88c519-c20b-4e26-86c2-5b62b163af37/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:47 crc kubenswrapper[4832]: I0125 08:57:47.094575 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f649cfc6-vzpx7_26fd6803-3263-4989-a86e-908f6a504d14/horizon-log/0.log" Jan 25 08:57:47 crc kubenswrapper[4832]: I0125 08:57:47.134892 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-b4dhr_112e50b5-86e0-4401-b4f9-b32be27ab508/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:47 crc kubenswrapper[4832]: I0125 08:57:47.401764 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ad2ea2ab-d727-4547-b2b4-d905b66428e5/kube-state-metrics/0.log" Jan 25 08:57:47 crc kubenswrapper[4832]: I0125 08:57:47.421877 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-699f4599dd-j695n_b32b998a-5689-42f6-9c15-b7e794acb916/keystone-api/0.log" Jan 25 
08:57:47 crc kubenswrapper[4832]: I0125 08:57:47.631365 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-sllb7_d6839ea5-4201-48d8-b390-16fac4368cb9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:47 crc kubenswrapper[4832]: I0125 08:57:47.991942 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857c8bdbcf-kwd2q_d1a230b2-45ba-4298-b3d6-2280431c592d/neutron-api/0.log" Jan 25 08:57:48 crc kubenswrapper[4832]: I0125 08:57:48.001045 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857c8bdbcf-kwd2q_d1a230b2-45ba-4298-b3d6-2280431c592d/neutron-httpd/0.log" Jan 25 08:57:48 crc kubenswrapper[4832]: I0125 08:57:48.086053 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj_e0e39d1f-665b-486a-bc7c-d89d1e50fee9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:48 crc kubenswrapper[4832]: I0125 08:57:48.726446 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_853956ed-8d6c-401a-9d3b-7325013053a4/nova-api-log/0.log" Jan 25 08:57:48 crc kubenswrapper[4832]: I0125 08:57:48.740603 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b0b4eea3-2f29-4f50-a197-b3e6531df0d5/nova-cell0-conductor-conductor/0.log" Jan 25 08:57:48 crc kubenswrapper[4832]: I0125 08:57:48.876465 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_853956ed-8d6c-401a-9d3b-7325013053a4/nova-api-api/0.log" Jan 25 08:57:48 crc kubenswrapper[4832]: I0125 08:57:48.938431 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_2052de31-aa8d-4127-b9ef-12bdb9d90fd9/nova-cell1-conductor-conductor/0.log" Jan 25 08:57:49 crc kubenswrapper[4832]: I0125 08:57:49.051821 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c420c690-6a2a-4ccc-876b-b3ca1d5d8781/nova-cell1-novncproxy-novncproxy/0.log" Jan 25 08:57:49 crc kubenswrapper[4832]: I0125 08:57:49.266430 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-f8kjk_2859d34c-ae01-4c03-a14a-5256e17130ed/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:49 crc kubenswrapper[4832]: I0125 08:57:49.349468 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3c0a6750-31ec-4a66-8160-2f74a44a5d33/nova-metadata-log/0.log" Jan 25 08:57:49 crc kubenswrapper[4832]: I0125 08:57:49.704080 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d322a933-38eb-4eb0-81c7-86d11a5f2d2c/nova-scheduler-scheduler/0.log" Jan 25 08:57:49 crc kubenswrapper[4832]: I0125 08:57:49.953201 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43f07a95-68ce-4138-b2ff-ef2543e68e46/mysql-bootstrap/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.143077 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43f07a95-68ce-4138-b2ff-ef2543e68e46/mysql-bootstrap/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.145921 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43f07a95-68ce-4138-b2ff-ef2543e68e46/galera/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.397625 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9ca53255-293b-4c35-a202-ac7ad7ac8d65/mysql-bootstrap/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.548741 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9ca53255-293b-4c35-a202-ac7ad7ac8d65/mysql-bootstrap/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.566590 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9ca53255-293b-4c35-a202-ac7ad7ac8d65/galera/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.687697 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3c0a6750-31ec-4a66-8160-2f74a44a5d33/nova-metadata-metadata/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.772302 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a962ff03-629f-458b-b5dc-3980f55d9f66/openstackclient/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.923374 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hcd8h_4b6aa9f6-e110-4147-a8d0-b1c8287226d1/openstack-network-exporter/0.log" Jan 25 08:57:50 crc kubenswrapper[4832]: I0125 08:57:50.958315 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n6hrr_54cecc85-b18f-4136-bd00-cbcc0f680643/ovn-controller/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.133218 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovsdb-server-init/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.387285 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovsdb-server/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.409856 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovs-vswitchd/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.432609 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovsdb-server-init/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.664866 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_828fc400-0bbb-4fbb-ae6c-7aa12c12864a/openstack-network-exporter/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.696479 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bxs2f_23b2cd4e-4921-4082-8a44-50c065f88f52/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.708828 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_828fc400-0bbb-4fbb-ae6c-7aa12c12864a/ovn-northd/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.928823 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0d2475d7-df45-45d0-a604-22b5008d000f/openstack-network-exporter/0.log" Jan 25 08:57:51 crc kubenswrapper[4832]: I0125 08:57:51.929766 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0d2475d7-df45-45d0-a604-22b5008d000f/ovsdbserver-nb/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.135987 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_666395bf-0cf6-4e7a-a0d0-2ad1a8928424/openstack-network-exporter/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.162336 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_666395bf-0cf6-4e7a-a0d0-2ad1a8928424/ovsdbserver-sb/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.302518 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd5868dbb-cxxfw_c6f5e19c-ec70-424e-a446-09b1b78697be/placement-api/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.392308 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd5868dbb-cxxfw_c6f5e19c-ec70-424e-a446-09b1b78697be/placement-log/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.440077 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9cf62746-47cb-4e83-9211-57a799a06e93/setup-container/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.673648 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_efe389bf-7e64-417c-96c8-d302858a0722/setup-container/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.763757 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9cf62746-47cb-4e83-9211-57a799a06e93/rabbitmq/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.799866 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9cf62746-47cb-4e83-9211-57a799a06e93/setup-container/0.log" Jan 25 08:57:52 crc kubenswrapper[4832]: I0125 08:57:52.917254 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_efe389bf-7e64-417c-96c8-d302858a0722/setup-container/0.log" Jan 25 08:57:53 crc kubenswrapper[4832]: I0125 08:57:53.003631 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-x685s_63023ae6-5cfd-4940-8160-7547220bbb5b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:53 crc kubenswrapper[4832]: I0125 08:57:53.023103 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_efe389bf-7e64-417c-96c8-d302858a0722/rabbitmq/0.log" Jan 25 08:57:53 crc kubenswrapper[4832]: I0125 08:57:53.268255 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv_be2a25f4-32ba-4406-b6a6-bdae29720048/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:53 crc kubenswrapper[4832]: I0125 08:57:53.320191 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-lr429_306310b5-6753-4a5a-b279-41e070c2f970/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:53 crc kubenswrapper[4832]: I0125 08:57:53.508232 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qvjw2_acaaf210-0845-4432-b149-30c8c038bfcb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:53 crc kubenswrapper[4832]: I0125 08:57:53.823843 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7xcl5_977dfa38-e1a5-4daf-b1b4-4be30da2ee0f/ssh-known-hosts-edpm-deployment/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.063848 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-658c5f7995-t6v6k_81bd3301-f264-4150-8f71-869af2c1ed3d/proxy-server/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.097404 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-658c5f7995-t6v6k_81bd3301-f264-4150-8f71-869af2c1ed3d/proxy-httpd/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.192915 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-s7nx7_8780670c-4459-4064-a5ee-d22abf7923aa/swift-ring-rebalance/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.355539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-auditor/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.371038 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-reaper/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.438597 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-replicator/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.563606 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-auditor/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.636940 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-server/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.642154 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-server/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.642775 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-replicator/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.811021 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-updater/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.885325 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-expirer/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.907850 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-auditor/0.log" Jan 25 08:57:54 crc kubenswrapper[4832]: I0125 08:57:54.910979 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-replicator/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.030687 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-server/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.103286 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/rsync/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.134534 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-updater/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.161318 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/swift-recon-cron/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.393443 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-548xj_303826b3-afb9-4ce0-a967-9a30c910c85b/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.426970 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f075c376-fe6e-44de-bb3d-113de5b9fb3f/tempest-tests-tempest-tests-runner/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.666876 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5d3f03a6-2f57-4a65-9e70-0828473a9469/test-operator-logs-container/0.log" Jan 25 08:57:55 crc kubenswrapper[4832]: I0125 08:57:55.679818 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jb565_51471519-c6e2-4ab1-9536-3443579b4bb1/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 08:57:58 crc kubenswrapper[4832]: I0125 08:57:58.669826 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 
08:57:58 crc kubenswrapper[4832]: E0125 08:57:58.670543 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:58:04 crc kubenswrapper[4832]: I0125 08:58:04.578014 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_44713664-4137-4321-baff-36c54dcbae96/memcached/0.log" Jan 25 08:58:12 crc kubenswrapper[4832]: I0125 08:58:12.669741 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:58:12 crc kubenswrapper[4832]: E0125 08:58:12.670554 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.004954 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/util/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.206568 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/util/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.210474 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/pull/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.216096 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/pull/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.387625 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/util/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.416770 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/pull/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.423824 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/extract/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.636177 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-hr9t5_8251d5ba-3a9a-429c-ba20-1af897640ad3/manager/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.653104 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-qdwdw_b3a8f752-cc73-4933-88d1-3b661a42ead2/manager/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 08:58:22.816861 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-75hsw_0cac9e7d-b342-4b55-a667-76fa1c144080/manager/0.log" Jan 25 08:58:22 crc kubenswrapper[4832]: I0125 
08:58:22.942061 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-mgsq7_b1702aab-2dd8-488f-8a7f-93f43df4b0ab/manager/0.log" Jan 25 08:58:23 crc kubenswrapper[4832]: I0125 08:58:23.030903 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-h4c7b_efdb6007-fdd7-4a18-9dba-4f1571f6f822/manager/0.log" Jan 25 08:58:23 crc kubenswrapper[4832]: I0125 08:58:23.121494 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nzjmz_3f993c1e-81ae-4e86-9b28-eccb1db48f2b/manager/0.log" Jan 25 08:58:23 crc kubenswrapper[4832]: I0125 08:58:23.383201 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-t8jng_44be34d2-851c-4bf5-a3fb-87607d045d1f/manager/0.log" Jan 25 08:58:23 crc kubenswrapper[4832]: I0125 08:58:23.424777 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-vt5m9_29b29aa4-b326-4515-9842-6d848c208096/manager/0.log" Jan 25 08:58:23 crc kubenswrapper[4832]: I0125 08:58:23.564792 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vvwcx_50da9b0d-da00-4211-95cd-0218828341e5/manager/0.log" Jan 25 08:58:23 crc kubenswrapper[4832]: I0125 08:58:23.621371 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-mstsp_d75c853c-428e-4f6a-8a82-a050b71af662/manager/0.log" Jan 25 08:58:23 crc kubenswrapper[4832]: I0125 08:58:23.757171 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7_31cef49b-390b-4029-bdc4-64893be3d183/manager/0.log" Jan 25 08:58:23 crc 
kubenswrapper[4832]: I0125 08:58:23.861304 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-hpqjz_0c897c34-1c91-416c-91e2-65ae83958e10/manager/0.log" Jan 25 08:58:24 crc kubenswrapper[4832]: I0125 08:58:24.021520 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-q67lr_d221c44f-6fb5-4b96-b84e-f1d55253ed08/manager/0.log" Jan 25 08:58:24 crc kubenswrapper[4832]: I0125 08:58:24.055652 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-642xd_b618d12e-02c2-4ae7-872a-15bd233259b5/manager/0.log" Jan 25 08:58:24 crc kubenswrapper[4832]: I0125 08:58:24.223247 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw_3b784c4a-e1cf-42fb-ad96-dca059f63e79/manager/0.log" Jan 25 08:58:24 crc kubenswrapper[4832]: I0125 08:58:24.387243 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d9d58658-glj79_6daad9ca-374e-4351-b5f4-3b262d9816b6/operator/0.log" Jan 25 08:58:24 crc kubenswrapper[4832]: I0125 08:58:24.599735 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-k945x_40c93737-1880-48e7-a342-d3a8c8a5ad68/registry-server/0.log" Jan 25 08:58:24 crc kubenswrapper[4832]: I0125 08:58:24.846791 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cf7rg_8d21c83b-b981-4466-b81a-ed7954d1f3cb/manager/0.log" Jan 25 08:58:24 crc kubenswrapper[4832]: I0125 08:58:24.883649 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-lrsxz_1e30c775-7a32-478e-8c3c-7312757f846b/manager/0.log" Jan 25 
08:58:25 crc kubenswrapper[4832]: I0125 08:58:25.081901 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-f87nw_cdb822ca-2a1d-4b10-8d44-f2cb33173358/operator/0.log" Jan 25 08:58:25 crc kubenswrapper[4832]: I0125 08:58:25.304335 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-zwlrf_eb801494-724f-482a-a359-896e5b735b62/manager/0.log" Jan 25 08:58:25 crc kubenswrapper[4832]: I0125 08:58:25.493087 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-59gds_47605944-bcb8-4196-9eb3-b26c2e923e70/manager/0.log" Jan 25 08:58:25 crc kubenswrapper[4832]: I0125 08:58:25.608990 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-745947945d-jwhxb_1529f819-52bd-428f-970f-5f67f071e729/manager/0.log" Jan 25 08:58:25 crc kubenswrapper[4832]: I0125 08:58:25.630741 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-qnxqc_c3356b9d-3a3c-4583-9803-d08fcb621401/manager/0.log" Jan 25 08:58:25 crc kubenswrapper[4832]: I0125 08:58:25.777154 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-57npv_1f038807-2bed-41a2-aecd-35d29e529eb8/manager/0.log" Jan 25 08:58:26 crc kubenswrapper[4832]: I0125 08:58:26.669880 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:58:26 crc kubenswrapper[4832]: E0125 08:58:26.670193 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:58:41 crc kubenswrapper[4832]: I0125 08:58:41.669790 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:58:41 crc kubenswrapper[4832]: E0125 08:58:41.670608 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:58:43 crc kubenswrapper[4832]: I0125 08:58:43.658367 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fns8l_a32ac557-809a-4a0d-8c18-3c8c5730e849/control-plane-machine-set-operator/0.log" Jan 25 08:58:43 crc kubenswrapper[4832]: I0125 08:58:43.842521 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-29fbk_6afbd903-07e1-4806-9a41-a073a6a4acb7/kube-rbac-proxy/0.log" Jan 25 08:58:43 crc kubenswrapper[4832]: I0125 08:58:43.896494 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-29fbk_6afbd903-07e1-4806-9a41-a073a6a4acb7/machine-api-operator/0.log" Jan 25 08:58:53 crc kubenswrapper[4832]: I0125 08:58:53.670432 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:58:53 crc kubenswrapper[4832]: E0125 08:58:53.671249 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:58:55 crc kubenswrapper[4832]: I0125 08:58:55.614748 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-n5qlr_3f1a7c21-638b-4421-b695-12d246c8909c/cert-manager-controller/0.log" Jan 25 08:58:55 crc kubenswrapper[4832]: I0125 08:58:55.769767 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-m4mtp_93467136-4fbc-430d-88c8-44d921001d30/cert-manager-cainjector/0.log" Jan 25 08:58:55 crc kubenswrapper[4832]: I0125 08:58:55.845233 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5kx64_b8b3bc3a-3311-4381-98b3-546a392b9967/cert-manager-webhook/0.log" Jan 25 08:59:05 crc kubenswrapper[4832]: I0125 08:59:05.669516 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:59:05 crc kubenswrapper[4832]: E0125 08:59:05.670370 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:59:08 crc kubenswrapper[4832]: I0125 08:59:08.472086 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-q6rnr_2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c/nmstate-console-plugin/0.log" Jan 25 08:59:08 crc kubenswrapper[4832]: I0125 08:59:08.585763 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rjtfb_83613ef6-706d-43d4-b310-98579e87fb5a/nmstate-handler/0.log" Jan 25 08:59:08 crc kubenswrapper[4832]: I0125 08:59:08.654293 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2kvpm_e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce/kube-rbac-proxy/0.log" Jan 25 08:59:08 crc kubenswrapper[4832]: I0125 08:59:08.701684 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2kvpm_e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce/nmstate-metrics/0.log" Jan 25 08:59:08 crc kubenswrapper[4832]: I0125 08:59:08.829188 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-8j4d7_fdb77b21-70d0-4666-807f-60d0aed1040a/nmstate-operator/0.log" Jan 25 08:59:08 crc kubenswrapper[4832]: I0125 08:59:08.917341 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-c4g4v_fe63b032-94cc-4495-bc9b-84040a04da49/nmstate-webhook/0.log" Jan 25 08:59:19 crc kubenswrapper[4832]: I0125 08:59:19.670332 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:59:19 crc kubenswrapper[4832]: E0125 08:59:19.673968 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" 
Jan 25 08:59:32 crc kubenswrapper[4832]: I0125 08:59:32.671410 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:59:32 crc kubenswrapper[4832]: E0125 08:59:32.672133 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:59:36 crc kubenswrapper[4832]: I0125 08:59:36.649922 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z2hg2_80c752a5-a0c6-4968-8f2f-4b5aa047c6c5/kube-rbac-proxy/0.log" Jan 25 08:59:36 crc kubenswrapper[4832]: I0125 08:59:36.788540 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z2hg2_80c752a5-a0c6-4968-8f2f-4b5aa047c6c5/controller/0.log" Jan 25 08:59:36 crc kubenswrapper[4832]: I0125 08:59:36.904500 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.086764 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.092516 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.099296 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 08:59:37 crc 
kubenswrapper[4832]: I0125 08:59:37.115851 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.267795 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.312915 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.330166 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.336266 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.483982 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.505755 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.510155 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.531830 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/controller/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.684653 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/frr-metrics/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.769070 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/kube-rbac-proxy-frr/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.775087 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/kube-rbac-proxy/0.log" Jan 25 08:59:37 crc kubenswrapper[4832]: I0125 08:59:37.891374 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/reloader/0.log" Jan 25 08:59:38 crc kubenswrapper[4832]: I0125 08:59:38.022770 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-np4h7_940e2830-7ef2-4237-a053-6981a3bbf2b3/frr-k8s-webhook-server/0.log" Jan 25 08:59:38 crc kubenswrapper[4832]: I0125 08:59:38.291057 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5864b67f75-pvtmd_71c97cd3-3f75-4fbd-84d8-f08942aba882/manager/0.log" Jan 25 08:59:38 crc kubenswrapper[4832]: I0125 08:59:38.304975 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ffcf449bb-jz2q4_d6219f5c-261f-419a-b3de-ec9119991024/webhook-server/0.log" Jan 25 08:59:38 crc kubenswrapper[4832]: I0125 08:59:38.502350 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbb8k_4095df57-d3c6-4d95-8f54-1d5eafc2a919/kube-rbac-proxy/0.log" Jan 25 08:59:38 crc kubenswrapper[4832]: I0125 08:59:38.952257 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbb8k_4095df57-d3c6-4d95-8f54-1d5eafc2a919/speaker/0.log" Jan 25 08:59:39 crc kubenswrapper[4832]: I0125 08:59:39.044120 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/frr/0.log" Jan 25 08:59:47 crc kubenswrapper[4832]: I0125 08:59:47.676481 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:59:47 crc kubenswrapper[4832]: E0125 08:59:47.677400 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 08:59:50 crc kubenswrapper[4832]: I0125 08:59:50.629703 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/util/0.log" Jan 25 08:59:50 crc kubenswrapper[4832]: I0125 08:59:50.886127 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/util/0.log" Jan 25 08:59:50 crc kubenswrapper[4832]: I0125 08:59:50.886856 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/pull/0.log" Jan 25 08:59:50 crc kubenswrapper[4832]: I0125 08:59:50.952368 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/pull/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.111448 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/util/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.143905 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/pull/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.144907 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/extract/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.280707 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/util/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.459723 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/util/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.500885 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/pull/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.526188 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/pull/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.695744 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/util/0.log" Jan 25 
08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.714534 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/pull/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.717239 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/extract/0.log" Jan 25 08:59:51 crc kubenswrapper[4832]: I0125 08:59:51.874909 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-utilities/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.051305 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-utilities/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.076802 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-content/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.099310 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-content/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.280198 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-content/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.303668 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-utilities/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.486830 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-utilities/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.723564 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-utilities/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.729309 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-content/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.786054 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-content/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.910155 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/registry-server/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.956558 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-content/0.log" Jan 25 08:59:52 crc kubenswrapper[4832]: I0125 08:59:52.984989 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-utilities/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.164441 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ncr8s_12e3f428-4b38-471d-8048-e3d55ce0d4b4/marketplace-operator/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.312892 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-utilities/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.593197 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-utilities/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.646646 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-content/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.701102 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/registry-server/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.743372 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-content/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.815046 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-utilities/0.log" Jan 25 08:59:53 crc kubenswrapper[4832]: I0125 08:59:53.848167 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-content/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.053350 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/registry-server/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.076322 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-utilities/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.184010 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-content/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.219583 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-content/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.237561 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-utilities/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.440287 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-utilities/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.479128 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-content/0.log" Jan 25 08:59:54 crc kubenswrapper[4832]: I0125 08:59:54.884324 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/registry-server/0.log" Jan 25 08:59:58 crc kubenswrapper[4832]: I0125 08:59:58.669973 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 08:59:58 crc kubenswrapper[4832]: E0125 08:59:58.670851 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.167025 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g"] Jan 25 09:00:00 crc kubenswrapper[4832]: E0125 09:00:00.167815 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6fcba51-8bea-4761-93a4-eb10626cef22" containerName="container-00" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.167834 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6fcba51-8bea-4761-93a4-eb10626cef22" containerName="container-00" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.168050 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6fcba51-8bea-4761-93a4-eb10626cef22" containerName="container-00" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.168774 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.171372 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.172046 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.185254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g"] Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.248787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3106b1e-4681-44d9-afb2-f5cc69f93e50-secret-volume\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.248927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8b2\" (UniqueName: \"kubernetes.io/projected/c3106b1e-4681-44d9-afb2-f5cc69f93e50-kube-api-access-6c8b2\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.248958 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3106b1e-4681-44d9-afb2-f5cc69f93e50-config-volume\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.350959 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3106b1e-4681-44d9-afb2-f5cc69f93e50-secret-volume\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.351088 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8b2\" (UniqueName: \"kubernetes.io/projected/c3106b1e-4681-44d9-afb2-f5cc69f93e50-kube-api-access-6c8b2\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.351116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3106b1e-4681-44d9-afb2-f5cc69f93e50-config-volume\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.352261 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3106b1e-4681-44d9-afb2-f5cc69f93e50-config-volume\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.359088 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c3106b1e-4681-44d9-afb2-f5cc69f93e50-secret-volume\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.365939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8b2\" (UniqueName: \"kubernetes.io/projected/c3106b1e-4681-44d9-afb2-f5cc69f93e50-kube-api-access-6c8b2\") pod \"collect-profiles-29488860-7tx5g\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.490711 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:00 crc kubenswrapper[4832]: I0125 09:00:00.934376 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g"] Jan 25 09:00:00 crc kubenswrapper[4832]: W0125 09:00:00.939688 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3106b1e_4681_44d9_afb2_f5cc69f93e50.slice/crio-5a32b8729c1b9f5d5d1ad2031d8f1df7981547ad42f44d9016d32912cfedb024 WatchSource:0}: Error finding container 5a32b8729c1b9f5d5d1ad2031d8f1df7981547ad42f44d9016d32912cfedb024: Status 404 returned error can't find the container with id 5a32b8729c1b9f5d5d1ad2031d8f1df7981547ad42f44d9016d32912cfedb024 Jan 25 09:00:01 crc kubenswrapper[4832]: I0125 09:00:01.219678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" event={"ID":"c3106b1e-4681-44d9-afb2-f5cc69f93e50","Type":"ContainerStarted","Data":"d0320994aca0efc9012bf852dd32648ea488f799c2fdc5fe609115353bf4dd18"} Jan 25 09:00:01 crc 
kubenswrapper[4832]: I0125 09:00:01.219727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" event={"ID":"c3106b1e-4681-44d9-afb2-f5cc69f93e50","Type":"ContainerStarted","Data":"5a32b8729c1b9f5d5d1ad2031d8f1df7981547ad42f44d9016d32912cfedb024"} Jan 25 09:00:01 crc kubenswrapper[4832]: I0125 09:00:01.238789 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" podStartSLOduration=1.23876339 podStartE2EDuration="1.23876339s" podCreationTimestamp="2026-01-25 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 09:00:01.236049515 +0000 UTC m=+3783.909873048" watchObservedRunningTime="2026-01-25 09:00:01.23876339 +0000 UTC m=+3783.912586923" Jan 25 09:00:02 crc kubenswrapper[4832]: I0125 09:00:02.232587 4832 generic.go:334] "Generic (PLEG): container finished" podID="c3106b1e-4681-44d9-afb2-f5cc69f93e50" containerID="d0320994aca0efc9012bf852dd32648ea488f799c2fdc5fe609115353bf4dd18" exitCode=0 Jan 25 09:00:02 crc kubenswrapper[4832]: I0125 09:00:02.232876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" event={"ID":"c3106b1e-4681-44d9-afb2-f5cc69f93e50","Type":"ContainerDied","Data":"d0320994aca0efc9012bf852dd32648ea488f799c2fdc5fe609115353bf4dd18"} Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.595472 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.716816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8b2\" (UniqueName: \"kubernetes.io/projected/c3106b1e-4681-44d9-afb2-f5cc69f93e50-kube-api-access-6c8b2\") pod \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.716915 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3106b1e-4681-44d9-afb2-f5cc69f93e50-config-volume\") pod \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.717083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3106b1e-4681-44d9-afb2-f5cc69f93e50-secret-volume\") pod \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\" (UID: \"c3106b1e-4681-44d9-afb2-f5cc69f93e50\") " Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.717516 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3106b1e-4681-44d9-afb2-f5cc69f93e50-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3106b1e-4681-44d9-afb2-f5cc69f93e50" (UID: "c3106b1e-4681-44d9-afb2-f5cc69f93e50"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.717945 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3106b1e-4681-44d9-afb2-f5cc69f93e50-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.722904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3106b1e-4681-44d9-afb2-f5cc69f93e50-kube-api-access-6c8b2" (OuterVolumeSpecName: "kube-api-access-6c8b2") pod "c3106b1e-4681-44d9-afb2-f5cc69f93e50" (UID: "c3106b1e-4681-44d9-afb2-f5cc69f93e50"). InnerVolumeSpecName "kube-api-access-6c8b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.735492 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3106b1e-4681-44d9-afb2-f5cc69f93e50-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c3106b1e-4681-44d9-afb2-f5cc69f93e50" (UID: "c3106b1e-4681-44d9-afb2-f5cc69f93e50"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.820236 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3106b1e-4681-44d9-afb2-f5cc69f93e50-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 09:00:03 crc kubenswrapper[4832]: I0125 09:00:03.820274 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8b2\" (UniqueName: \"kubernetes.io/projected/c3106b1e-4681-44d9-afb2-f5cc69f93e50-kube-api-access-6c8b2\") on node \"crc\" DevicePath \"\"" Jan 25 09:00:04 crc kubenswrapper[4832]: I0125 09:00:04.298337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" event={"ID":"c3106b1e-4681-44d9-afb2-f5cc69f93e50","Type":"ContainerDied","Data":"5a32b8729c1b9f5d5d1ad2031d8f1df7981547ad42f44d9016d32912cfedb024"} Jan 25 09:00:04 crc kubenswrapper[4832]: I0125 09:00:04.298399 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488860-7tx5g" Jan 25 09:00:04 crc kubenswrapper[4832]: I0125 09:00:04.298411 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a32b8729c1b9f5d5d1ad2031d8f1df7981547ad42f44d9016d32912cfedb024" Jan 25 09:00:04 crc kubenswrapper[4832]: I0125 09:00:04.322282 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm"] Jan 25 09:00:04 crc kubenswrapper[4832]: I0125 09:00:04.335307 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488815-gd6rm"] Jan 25 09:00:05 crc kubenswrapper[4832]: I0125 09:00:05.681082 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a053d916-f24b-4013-b7bf-9a4abe14e218" path="/var/lib/kubelet/pods/a053d916-f24b-4013-b7bf-9a4abe14e218/volumes" Jan 25 09:00:12 crc kubenswrapper[4832]: I0125 09:00:12.670012 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:00:12 crc kubenswrapper[4832]: E0125 09:00:12.670920 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.438249 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h688r"] Jan 25 09:00:16 crc kubenswrapper[4832]: E0125 09:00:16.440618 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3106b1e-4681-44d9-afb2-f5cc69f93e50" 
containerName="collect-profiles" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.440747 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3106b1e-4681-44d9-afb2-f5cc69f93e50" containerName="collect-profiles" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.441163 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3106b1e-4681-44d9-afb2-f5cc69f93e50" containerName="collect-profiles" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.443265 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.449017 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h688r"] Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.582780 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-utilities\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.583044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7np24\" (UniqueName: \"kubernetes.io/projected/c1234276-4dc8-4975-8f62-c81eae9ac682-kube-api-access-7np24\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.583087 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-catalog-content\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " 
pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.684558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-utilities\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.684627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7np24\" (UniqueName: \"kubernetes.io/projected/c1234276-4dc8-4975-8f62-c81eae9ac682-kube-api-access-7np24\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.684670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-catalog-content\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.685180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-utilities\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.685234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-catalog-content\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc 
kubenswrapper[4832]: I0125 09:00:16.706099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7np24\" (UniqueName: \"kubernetes.io/projected/c1234276-4dc8-4975-8f62-c81eae9ac682-kube-api-access-7np24\") pod \"redhat-operators-h688r\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:16 crc kubenswrapper[4832]: I0125 09:00:16.779405 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:17 crc kubenswrapper[4832]: I0125 09:00:17.383587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h688r"] Jan 25 09:00:17 crc kubenswrapper[4832]: I0125 09:00:17.446832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h688r" event={"ID":"c1234276-4dc8-4975-8f62-c81eae9ac682","Type":"ContainerStarted","Data":"238dd7e50e6e3a2d2a3361811d9a42b3943454527f2957e7c1ee88b80faf9314"} Jan 25 09:00:18 crc kubenswrapper[4832]: I0125 09:00:18.458217 4832 generic.go:334] "Generic (PLEG): container finished" podID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerID="5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d" exitCode=0 Jan 25 09:00:18 crc kubenswrapper[4832]: I0125 09:00:18.458549 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h688r" event={"ID":"c1234276-4dc8-4975-8f62-c81eae9ac682","Type":"ContainerDied","Data":"5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d"} Jan 25 09:00:18 crc kubenswrapper[4832]: I0125 09:00:18.461723 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 09:00:19 crc kubenswrapper[4832]: I0125 09:00:19.471938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h688r" 
event={"ID":"c1234276-4dc8-4975-8f62-c81eae9ac682","Type":"ContainerStarted","Data":"b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9"} Jan 25 09:00:22 crc kubenswrapper[4832]: I0125 09:00:22.500519 4832 generic.go:334] "Generic (PLEG): container finished" podID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerID="b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9" exitCode=0 Jan 25 09:00:22 crc kubenswrapper[4832]: I0125 09:00:22.500620 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h688r" event={"ID":"c1234276-4dc8-4975-8f62-c81eae9ac682","Type":"ContainerDied","Data":"b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9"} Jan 25 09:00:23 crc kubenswrapper[4832]: I0125 09:00:23.515266 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h688r" event={"ID":"c1234276-4dc8-4975-8f62-c81eae9ac682","Type":"ContainerStarted","Data":"863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b"} Jan 25 09:00:23 crc kubenswrapper[4832]: I0125 09:00:23.534340 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h688r" podStartSLOduration=3.004940888 podStartE2EDuration="7.53431875s" podCreationTimestamp="2026-01-25 09:00:16 +0000 UTC" firstStartedPulling="2026-01-25 09:00:18.461407616 +0000 UTC m=+3801.135231149" lastFinishedPulling="2026-01-25 09:00:22.990785478 +0000 UTC m=+3805.664609011" observedRunningTime="2026-01-25 09:00:23.533113953 +0000 UTC m=+3806.206937496" watchObservedRunningTime="2026-01-25 09:00:23.53431875 +0000 UTC m=+3806.208142283" Jan 25 09:00:26 crc kubenswrapper[4832]: I0125 09:00:26.779935 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:26 crc kubenswrapper[4832]: I0125 09:00:26.780267 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:27 crc kubenswrapper[4832]: I0125 09:00:27.669807 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:00:27 crc kubenswrapper[4832]: E0125 09:00:27.670536 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:00:27 crc kubenswrapper[4832]: I0125 09:00:27.840915 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h688r" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="registry-server" probeResult="failure" output=< Jan 25 09:00:27 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 25 09:00:27 crc kubenswrapper[4832]: > Jan 25 09:00:36 crc kubenswrapper[4832]: I0125 09:00:36.838747 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:36 crc kubenswrapper[4832]: I0125 09:00:36.926100 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:37 crc kubenswrapper[4832]: I0125 09:00:37.086271 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h688r"] Jan 25 09:00:38 crc kubenswrapper[4832]: I0125 09:00:38.663417 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h688r" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="registry-server" 
containerID="cri-o://863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b" gracePeriod=2 Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.122512 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.293374 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7np24\" (UniqueName: \"kubernetes.io/projected/c1234276-4dc8-4975-8f62-c81eae9ac682-kube-api-access-7np24\") pod \"c1234276-4dc8-4975-8f62-c81eae9ac682\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.295947 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-utilities\") pod \"c1234276-4dc8-4975-8f62-c81eae9ac682\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.296643 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-catalog-content\") pod \"c1234276-4dc8-4975-8f62-c81eae9ac682\" (UID: \"c1234276-4dc8-4975-8f62-c81eae9ac682\") " Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.296648 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-utilities" (OuterVolumeSpecName: "utilities") pod "c1234276-4dc8-4975-8f62-c81eae9ac682" (UID: "c1234276-4dc8-4975-8f62-c81eae9ac682"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.299525 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.312096 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1234276-4dc8-4975-8f62-c81eae9ac682-kube-api-access-7np24" (OuterVolumeSpecName: "kube-api-access-7np24") pod "c1234276-4dc8-4975-8f62-c81eae9ac682" (UID: "c1234276-4dc8-4975-8f62-c81eae9ac682"). InnerVolumeSpecName "kube-api-access-7np24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.402945 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7np24\" (UniqueName: \"kubernetes.io/projected/c1234276-4dc8-4975-8f62-c81eae9ac682-kube-api-access-7np24\") on node \"crc\" DevicePath \"\"" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.423236 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1234276-4dc8-4975-8f62-c81eae9ac682" (UID: "c1234276-4dc8-4975-8f62-c81eae9ac682"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.505792 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1234276-4dc8-4975-8f62-c81eae9ac682-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.691207 4832 generic.go:334] "Generic (PLEG): container finished" podID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerID="863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b" exitCode=0 Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.691447 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h688r" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.700783 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h688r" event={"ID":"c1234276-4dc8-4975-8f62-c81eae9ac682","Type":"ContainerDied","Data":"863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b"} Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.700868 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h688r" event={"ID":"c1234276-4dc8-4975-8f62-c81eae9ac682","Type":"ContainerDied","Data":"238dd7e50e6e3a2d2a3361811d9a42b3943454527f2957e7c1ee88b80faf9314"} Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.700894 4832 scope.go:117] "RemoveContainer" containerID="863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.742936 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h688r"] Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.746057 4832 scope.go:117] "RemoveContainer" containerID="b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 
09:00:39.751559 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h688r"] Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.771754 4832 scope.go:117] "RemoveContainer" containerID="5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.813162 4832 scope.go:117] "RemoveContainer" containerID="863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b" Jan 25 09:00:39 crc kubenswrapper[4832]: E0125 09:00:39.813675 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b\": container with ID starting with 863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b not found: ID does not exist" containerID="863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.813783 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b"} err="failed to get container status \"863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b\": rpc error: code = NotFound desc = could not find container \"863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b\": container with ID starting with 863cb5021a6c42b1e1c8b8ebfd4fdb3cab7db4ad545efa86370c5bc915334d5b not found: ID does not exist" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.813869 4832 scope.go:117] "RemoveContainer" containerID="b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9" Jan 25 09:00:39 crc kubenswrapper[4832]: E0125 09:00:39.814363 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9\": container with ID 
starting with b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9 not found: ID does not exist" containerID="b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.814492 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9"} err="failed to get container status \"b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9\": rpc error: code = NotFound desc = could not find container \"b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9\": container with ID starting with b507b3c5c3e758c620c6ac9632549b96d8ef83f9b293732d573a7ef6beab3ec9 not found: ID does not exist" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.814571 4832 scope.go:117] "RemoveContainer" containerID="5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d" Jan 25 09:00:39 crc kubenswrapper[4832]: E0125 09:00:39.815020 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d\": container with ID starting with 5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d not found: ID does not exist" containerID="5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d" Jan 25 09:00:39 crc kubenswrapper[4832]: I0125 09:00:39.815131 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d"} err="failed to get container status \"5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d\": rpc error: code = NotFound desc = could not find container \"5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d\": container with ID starting with 5d2f456b227792af8f8e5f5e32f6053714d003888c05fde944a114d598a06b4d not found: 
ID does not exist" Jan 25 09:00:40 crc kubenswrapper[4832]: I0125 09:00:40.670094 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:00:40 crc kubenswrapper[4832]: E0125 09:00:40.670729 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:00:41 crc kubenswrapper[4832]: I0125 09:00:41.681323 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" path="/var/lib/kubelet/pods/c1234276-4dc8-4975-8f62-c81eae9ac682/volumes" Jan 25 09:00:42 crc kubenswrapper[4832]: I0125 09:00:42.597884 4832 scope.go:117] "RemoveContainer" containerID="2c535f6ce45bd6825b7b760a6f368451fd16bb9e78bb41f0b0003ddd1b5b96e9" Jan 25 09:00:51 crc kubenswrapper[4832]: I0125 09:00:51.671270 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:00:51 crc kubenswrapper[4832]: E0125 09:00:51.672041 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.157204 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29488861-lgmdj"] Jan 25 09:01:00 crc kubenswrapper[4832]: E0125 
09:01:00.158252 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="registry-server" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.158269 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="registry-server" Jan 25 09:01:00 crc kubenswrapper[4832]: E0125 09:01:00.158314 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="extract-utilities" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.158323 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="extract-utilities" Jan 25 09:01:00 crc kubenswrapper[4832]: E0125 09:01:00.158336 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="extract-content" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.158343 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="extract-content" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.158587 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1234276-4dc8-4975-8f62-c81eae9ac682" containerName="registry-server" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.159304 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.171523 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29488861-lgmdj"] Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.227332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-config-data\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.227470 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-combined-ca-bundle\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.227494 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-fernet-keys\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.227577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcspv\" (UniqueName: \"kubernetes.io/projected/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-kube-api-access-qcspv\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.328831 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qcspv\" (UniqueName: \"kubernetes.io/projected/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-kube-api-access-qcspv\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.330416 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-config-data\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.330526 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-combined-ca-bundle\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.330552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-fernet-keys\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.336965 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-combined-ca-bundle\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.337497 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-config-data\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.338252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-fernet-keys\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.348827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcspv\" (UniqueName: \"kubernetes.io/projected/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-kube-api-access-qcspv\") pod \"keystone-cron-29488861-lgmdj\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.489715 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:00 crc kubenswrapper[4832]: I0125 09:01:00.934208 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29488861-lgmdj"] Jan 25 09:01:01 crc kubenswrapper[4832]: I0125 09:01:01.950864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29488861-lgmdj" event={"ID":"15559108-ea2b-4acc-908e-8b3d1f7a3dbf","Type":"ContainerStarted","Data":"ea018940fe9d6b4920d532e1da1597ea1af1916948f856b109024fbaac1264b9"} Jan 25 09:01:01 crc kubenswrapper[4832]: I0125 09:01:01.951236 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29488861-lgmdj" event={"ID":"15559108-ea2b-4acc-908e-8b3d1f7a3dbf","Type":"ContainerStarted","Data":"78c4562405722751a9cfe1dcc0a327d7123e8d12d295d6d5f5f2339e8eeeb523"} Jan 25 09:01:01 crc kubenswrapper[4832]: I0125 09:01:01.975920 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29488861-lgmdj" podStartSLOduration=1.975896793 podStartE2EDuration="1.975896793s" podCreationTimestamp="2026-01-25 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 09:01:01.96645426 +0000 UTC m=+3844.640277813" watchObservedRunningTime="2026-01-25 09:01:01.975896793 +0000 UTC m=+3844.649720316" Jan 25 09:01:03 crc kubenswrapper[4832]: I0125 09:01:03.972650 4832 generic.go:334] "Generic (PLEG): container finished" podID="15559108-ea2b-4acc-908e-8b3d1f7a3dbf" containerID="ea018940fe9d6b4920d532e1da1597ea1af1916948f856b109024fbaac1264b9" exitCode=0 Jan 25 09:01:03 crc kubenswrapper[4832]: I0125 09:01:03.972827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29488861-lgmdj" 
event={"ID":"15559108-ea2b-4acc-908e-8b3d1f7a3dbf","Type":"ContainerDied","Data":"ea018940fe9d6b4920d532e1da1597ea1af1916948f856b109024fbaac1264b9"} Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.320708 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.440796 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-fernet-keys\") pod \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.440931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-combined-ca-bundle\") pod \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.441122 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcspv\" (UniqueName: \"kubernetes.io/projected/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-kube-api-access-qcspv\") pod \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.441277 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-config-data\") pod \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\" (UID: \"15559108-ea2b-4acc-908e-8b3d1f7a3dbf\") " Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.447494 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-kube-api-access-qcspv" 
(OuterVolumeSpecName: "kube-api-access-qcspv") pod "15559108-ea2b-4acc-908e-8b3d1f7a3dbf" (UID: "15559108-ea2b-4acc-908e-8b3d1f7a3dbf"). InnerVolumeSpecName "kube-api-access-qcspv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.448152 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "15559108-ea2b-4acc-908e-8b3d1f7a3dbf" (UID: "15559108-ea2b-4acc-908e-8b3d1f7a3dbf"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.479391 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15559108-ea2b-4acc-908e-8b3d1f7a3dbf" (UID: "15559108-ea2b-4acc-908e-8b3d1f7a3dbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.506633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-config-data" (OuterVolumeSpecName: "config-data") pod "15559108-ea2b-4acc-908e-8b3d1f7a3dbf" (UID: "15559108-ea2b-4acc-908e-8b3d1f7a3dbf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.544132 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.544185 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.544208 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcspv\" (UniqueName: \"kubernetes.io/projected/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-kube-api-access-qcspv\") on node \"crc\" DevicePath \"\"" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.544227 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15559108-ea2b-4acc-908e-8b3d1f7a3dbf-config-data\") on node \"crc\" DevicePath \"\"" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.991907 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29488861-lgmdj" event={"ID":"15559108-ea2b-4acc-908e-8b3d1f7a3dbf","Type":"ContainerDied","Data":"78c4562405722751a9cfe1dcc0a327d7123e8d12d295d6d5f5f2339e8eeeb523"} Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.991955 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c4562405722751a9cfe1dcc0a327d7123e8d12d295d6d5f5f2339e8eeeb523" Jan 25 09:01:05 crc kubenswrapper[4832]: I0125 09:01:05.991999 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29488861-lgmdj" Jan 25 09:01:06 crc kubenswrapper[4832]: I0125 09:01:06.670055 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:01:06 crc kubenswrapper[4832]: E0125 09:01:06.670749 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:01:20 crc kubenswrapper[4832]: I0125 09:01:20.670259 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:01:20 crc kubenswrapper[4832]: E0125 09:01:20.671080 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:01:34 crc kubenswrapper[4832]: I0125 09:01:34.669290 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:01:35 crc kubenswrapper[4832]: I0125 09:01:35.260695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"0ea911382d8d0a0eb2577340195474126353ecae004440333081f27f25b490d7"} Jan 25 09:01:44 crc kubenswrapper[4832]: I0125 09:01:44.337993 4832 
generic.go:334] "Generic (PLEG): container finished" podID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerID="a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17" exitCode=0 Jan 25 09:01:44 crc kubenswrapper[4832]: I0125 09:01:44.338220 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t2k6c/must-gather-wf66j" event={"ID":"c2c42541-00a2-4d5a-a875-3b52d73b08eb","Type":"ContainerDied","Data":"a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17"} Jan 25 09:01:44 crc kubenswrapper[4832]: I0125 09:01:44.340559 4832 scope.go:117] "RemoveContainer" containerID="a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17" Jan 25 09:01:45 crc kubenswrapper[4832]: I0125 09:01:45.402102 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t2k6c_must-gather-wf66j_c2c42541-00a2-4d5a-a875-3b52d73b08eb/gather/0.log" Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.439982 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t2k6c/must-gather-wf66j"] Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.440907 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-t2k6c/must-gather-wf66j" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerName="copy" containerID="cri-o://a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834" gracePeriod=2 Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.448167 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t2k6c/must-gather-wf66j"] Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.871144 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t2k6c_must-gather-wf66j_c2c42541-00a2-4d5a-a875-3b52d73b08eb/copy/0.log" Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.871879 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t2k6c/must-gather-wf66j" Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.979883 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf692\" (UniqueName: \"kubernetes.io/projected/c2c42541-00a2-4d5a-a875-3b52d73b08eb-kube-api-access-mf692\") pod \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.980049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c2c42541-00a2-4d5a-a875-3b52d73b08eb-must-gather-output\") pod \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\" (UID: \"c2c42541-00a2-4d5a-a875-3b52d73b08eb\") " Jan 25 09:01:53 crc kubenswrapper[4832]: I0125 09:01:53.985421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c42541-00a2-4d5a-a875-3b52d73b08eb-kube-api-access-mf692" (OuterVolumeSpecName: "kube-api-access-mf692") pod "c2c42541-00a2-4d5a-a875-3b52d73b08eb" (UID: "c2c42541-00a2-4d5a-a875-3b52d73b08eb"). InnerVolumeSpecName "kube-api-access-mf692". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.088826 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf692\" (UniqueName: \"kubernetes.io/projected/c2c42541-00a2-4d5a-a875-3b52d73b08eb-kube-api-access-mf692\") on node \"crc\" DevicePath \"\"" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.135002 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c42541-00a2-4d5a-a875-3b52d73b08eb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c2c42541-00a2-4d5a-a875-3b52d73b08eb" (UID: "c2c42541-00a2-4d5a-a875-3b52d73b08eb"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.190887 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c2c42541-00a2-4d5a-a875-3b52d73b08eb-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.449926 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t2k6c_must-gather-wf66j_c2c42541-00a2-4d5a-a875-3b52d73b08eb/copy/0.log" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.450265 4832 generic.go:334] "Generic (PLEG): container finished" podID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerID="a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834" exitCode=143 Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.450319 4832 scope.go:117] "RemoveContainer" containerID="a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.450351 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t2k6c/must-gather-wf66j" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.470350 4832 scope.go:117] "RemoveContainer" containerID="a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.573170 4832 scope.go:117] "RemoveContainer" containerID="a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834" Jan 25 09:01:54 crc kubenswrapper[4832]: E0125 09:01:54.573641 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834\": container with ID starting with a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834 not found: ID does not exist" containerID="a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.573684 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834"} err="failed to get container status \"a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834\": rpc error: code = NotFound desc = could not find container \"a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834\": container with ID starting with a31cf8509d9193a87d867f7d2bc61b2552efddc7ddb2431f6e4febfd80e63834 not found: ID does not exist" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.573710 4832 scope.go:117] "RemoveContainer" containerID="a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17" Jan 25 09:01:54 crc kubenswrapper[4832]: E0125 09:01:54.573913 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17\": container with ID starting with 
a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17 not found: ID does not exist" containerID="a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17" Jan 25 09:01:54 crc kubenswrapper[4832]: I0125 09:01:54.573938 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17"} err="failed to get container status \"a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17\": rpc error: code = NotFound desc = could not find container \"a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17\": container with ID starting with a84cf7e2c40f7d1d7f0c37dfec4ad70f7b6e2f0a60e43def974d50bbc0b0ab17 not found: ID does not exist" Jan 25 09:01:55 crc kubenswrapper[4832]: I0125 09:01:55.682293 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" path="/var/lib/kubelet/pods/c2c42541-00a2-4d5a-a875-3b52d73b08eb/volumes" Jan 25 09:02:42 crc kubenswrapper[4832]: I0125 09:02:42.688067 4832 scope.go:117] "RemoveContainer" containerID="a5f6cb748904837856822bcc5556449548eafb31f998f6b735fe970fe439417f" Jan 25 09:03:52 crc kubenswrapper[4832]: I0125 09:03:52.149673 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 09:03:52 crc kubenswrapper[4832]: I0125 09:03:52.150314 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 09:04:22 crc kubenswrapper[4832]: I0125 
09:04:22.149917 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 09:04:22 crc kubenswrapper[4832]: I0125 09:04:22.150324 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.475115 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v7wc8/must-gather-vqcpt"] Jan 25 09:04:47 crc kubenswrapper[4832]: E0125 09:04:47.476484 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15559108-ea2b-4acc-908e-8b3d1f7a3dbf" containerName="keystone-cron" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.476510 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="15559108-ea2b-4acc-908e-8b3d1f7a3dbf" containerName="keystone-cron" Jan 25 09:04:47 crc kubenswrapper[4832]: E0125 09:04:47.476534 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerName="gather" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.476543 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerName="gather" Jan 25 09:04:47 crc kubenswrapper[4832]: E0125 09:04:47.476574 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerName="copy" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.476583 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" 
containerName="copy" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.476872 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="15559108-ea2b-4acc-908e-8b3d1f7a3dbf" containerName="keystone-cron" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.476907 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerName="gather" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.476939 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c42541-00a2-4d5a-a875-3b52d73b08eb" containerName="copy" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.478441 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.480933 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v7wc8"/"default-dockercfg-xkthf" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.481421 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v7wc8"/"kube-root-ca.crt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.481694 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v7wc8"/"openshift-service-ca.crt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.486346 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v7wc8/must-gather-vqcpt"] Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.535225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64glj\" (UniqueName: \"kubernetes.io/projected/f683ac01-9d33-4a8d-8496-478b12af8e88-kube-api-access-64glj\") pod \"must-gather-vqcpt\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 
09:04:47.535289 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f683ac01-9d33-4a8d-8496-478b12af8e88-must-gather-output\") pod \"must-gather-vqcpt\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.637192 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64glj\" (UniqueName: \"kubernetes.io/projected/f683ac01-9d33-4a8d-8496-478b12af8e88-kube-api-access-64glj\") pod \"must-gather-vqcpt\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.637252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f683ac01-9d33-4a8d-8496-478b12af8e88-must-gather-output\") pod \"must-gather-vqcpt\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.637757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f683ac01-9d33-4a8d-8496-478b12af8e88-must-gather-output\") pod \"must-gather-vqcpt\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.655554 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64glj\" (UniqueName: \"kubernetes.io/projected/f683ac01-9d33-4a8d-8496-478b12af8e88-kube-api-access-64glj\") pod \"must-gather-vqcpt\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:47 crc kubenswrapper[4832]: I0125 09:04:47.800523 
4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:04:48 crc kubenswrapper[4832]: I0125 09:04:48.244020 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v7wc8/must-gather-vqcpt"] Jan 25 09:04:48 crc kubenswrapper[4832]: I0125 09:04:48.547707 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" event={"ID":"f683ac01-9d33-4a8d-8496-478b12af8e88","Type":"ContainerStarted","Data":"4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90"} Jan 25 09:04:48 crc kubenswrapper[4832]: I0125 09:04:48.547770 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" event={"ID":"f683ac01-9d33-4a8d-8496-478b12af8e88","Type":"ContainerStarted","Data":"520860339b7af275740a24e83161c56944eb0e320dc26cbfb188bdaf1b7140e5"} Jan 25 09:04:49 crc kubenswrapper[4832]: I0125 09:04:49.556591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" event={"ID":"f683ac01-9d33-4a8d-8496-478b12af8e88","Type":"ContainerStarted","Data":"d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8"} Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.149805 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.151624 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.151775 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.152784 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ea911382d8d0a0eb2577340195474126353ecae004440333081f27f25b490d7"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.152960 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://0ea911382d8d0a0eb2577340195474126353ecae004440333081f27f25b490d7" gracePeriod=600 Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.189031 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" podStartSLOduration=5.189006541 podStartE2EDuration="5.189006541s" podCreationTimestamp="2026-01-25 09:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 09:04:49.575254429 +0000 UTC m=+4072.249077962" watchObservedRunningTime="2026-01-25 09:04:52.189006541 +0000 UTC m=+4074.862830074" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.193121 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-49xd6"] Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.194332 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.230964 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsxg\" (UniqueName: \"kubernetes.io/projected/a7e54325-87ed-4348-8d78-cd6d696a8ff9-kube-api-access-xpsxg\") pod \"crc-debug-49xd6\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.231014 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7e54325-87ed-4348-8d78-cd6d696a8ff9-host\") pod \"crc-debug-49xd6\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.332945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpsxg\" (UniqueName: \"kubernetes.io/projected/a7e54325-87ed-4348-8d78-cd6d696a8ff9-kube-api-access-xpsxg\") pod \"crc-debug-49xd6\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.332999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7e54325-87ed-4348-8d78-cd6d696a8ff9-host\") pod \"crc-debug-49xd6\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.333112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7e54325-87ed-4348-8d78-cd6d696a8ff9-host\") pod \"crc-debug-49xd6\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc 
kubenswrapper[4832]: I0125 09:04:52.356010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsxg\" (UniqueName: \"kubernetes.io/projected/a7e54325-87ed-4348-8d78-cd6d696a8ff9-kube-api-access-xpsxg\") pod \"crc-debug-49xd6\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.513217 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:04:52 crc kubenswrapper[4832]: W0125 09:04:52.556424 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7e54325_87ed_4348_8d78_cd6d696a8ff9.slice/crio-dd413ee08ef9b6c3357373da58c5a34e3a0d609e10925df4747ca264bca5dc07 WatchSource:0}: Error finding container dd413ee08ef9b6c3357373da58c5a34e3a0d609e10925df4747ca264bca5dc07: Status 404 returned error can't find the container with id dd413ee08ef9b6c3357373da58c5a34e3a0d609e10925df4747ca264bca5dc07 Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.583965 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" event={"ID":"a7e54325-87ed-4348-8d78-cd6d696a8ff9","Type":"ContainerStarted","Data":"dd413ee08ef9b6c3357373da58c5a34e3a0d609e10925df4747ca264bca5dc07"} Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.588358 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="0ea911382d8d0a0eb2577340195474126353ecae004440333081f27f25b490d7" exitCode=0 Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.588424 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" 
event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"0ea911382d8d0a0eb2577340195474126353ecae004440333081f27f25b490d7"} Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.588461 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0"} Jan 25 09:04:52 crc kubenswrapper[4832]: I0125 09:04:52.588481 4832 scope.go:117] "RemoveContainer" containerID="47785627d9fed4967d30c7d530949092bec3ab3c86f8b6a114d139f561674311" Jan 25 09:04:53 crc kubenswrapper[4832]: I0125 09:04:53.599222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" event={"ID":"a7e54325-87ed-4348-8d78-cd6d696a8ff9","Type":"ContainerStarted","Data":"b5fab16c9dc503ad4816d720fd8c63242c4b53817c0b7f628b7618e96979fee6"} Jan 25 09:04:53 crc kubenswrapper[4832]: I0125 09:04:53.621667 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" podStartSLOduration=1.621640631 podStartE2EDuration="1.621640631s" podCreationTimestamp="2026-01-25 09:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 09:04:53.61646079 +0000 UTC m=+4076.290284313" watchObservedRunningTime="2026-01-25 09:04:53.621640631 +0000 UTC m=+4076.295464164" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.784352 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcmt8"] Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.787763 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.803142 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcmt8"] Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.837084 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnkt9\" (UniqueName: \"kubernetes.io/projected/18e36e85-9425-49ab-90d7-f36434353dbe-kube-api-access-mnkt9\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.837615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-utilities\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.837761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-catalog-content\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.940463 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-utilities\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.940546 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-catalog-content\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.940622 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnkt9\" (UniqueName: \"kubernetes.io/projected/18e36e85-9425-49ab-90d7-f36434353dbe-kube-api-access-mnkt9\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.941639 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-catalog-content\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.941667 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-utilities\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:06 crc kubenswrapper[4832]: I0125 09:05:06.969308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnkt9\" (UniqueName: \"kubernetes.io/projected/18e36e85-9425-49ab-90d7-f36434353dbe-kube-api-access-mnkt9\") pod \"certified-operators-xcmt8\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:07 crc kubenswrapper[4832]: I0125 09:05:07.163319 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:07 crc kubenswrapper[4832]: I0125 09:05:07.757345 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcmt8"] Jan 25 09:05:08 crc kubenswrapper[4832]: I0125 09:05:08.764126 4832 generic.go:334] "Generic (PLEG): container finished" podID="18e36e85-9425-49ab-90d7-f36434353dbe" containerID="68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba" exitCode=0 Jan 25 09:05:08 crc kubenswrapper[4832]: I0125 09:05:08.764250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmt8" event={"ID":"18e36e85-9425-49ab-90d7-f36434353dbe","Type":"ContainerDied","Data":"68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba"} Jan 25 09:05:08 crc kubenswrapper[4832]: I0125 09:05:08.764469 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmt8" event={"ID":"18e36e85-9425-49ab-90d7-f36434353dbe","Type":"ContainerStarted","Data":"cd0e59ee980256f7be93cf2566b2701d70befaec3c69692b927432b9f06644f1"} Jan 25 09:05:09 crc kubenswrapper[4832]: I0125 09:05:09.781526 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmt8" event={"ID":"18e36e85-9425-49ab-90d7-f36434353dbe","Type":"ContainerStarted","Data":"573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d"} Jan 25 09:05:10 crc kubenswrapper[4832]: I0125 09:05:10.792732 4832 generic.go:334] "Generic (PLEG): container finished" podID="18e36e85-9425-49ab-90d7-f36434353dbe" containerID="573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d" exitCode=0 Jan 25 09:05:10 crc kubenswrapper[4832]: I0125 09:05:10.793070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmt8" 
event={"ID":"18e36e85-9425-49ab-90d7-f36434353dbe","Type":"ContainerDied","Data":"573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d"} Jan 25 09:05:11 crc kubenswrapper[4832]: I0125 09:05:11.807824 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmt8" event={"ID":"18e36e85-9425-49ab-90d7-f36434353dbe","Type":"ContainerStarted","Data":"493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd"} Jan 25 09:05:11 crc kubenswrapper[4832]: I0125 09:05:11.859738 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcmt8" podStartSLOduration=3.410216322 podStartE2EDuration="5.859709493s" podCreationTimestamp="2026-01-25 09:05:06 +0000 UTC" firstStartedPulling="2026-01-25 09:05:08.767167923 +0000 UTC m=+4091.440991456" lastFinishedPulling="2026-01-25 09:05:11.216661094 +0000 UTC m=+4093.890484627" observedRunningTime="2026-01-25 09:05:11.834166898 +0000 UTC m=+4094.507990441" watchObservedRunningTime="2026-01-25 09:05:11.859709493 +0000 UTC m=+4094.533533046" Jan 25 09:05:17 crc kubenswrapper[4832]: I0125 09:05:17.164633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:17 crc kubenswrapper[4832]: I0125 09:05:17.165172 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:17 crc kubenswrapper[4832]: I0125 09:05:17.221579 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:17 crc kubenswrapper[4832]: I0125 09:05:17.912789 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:17 crc kubenswrapper[4832]: I0125 09:05:17.971158 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-xcmt8"] Jan 25 09:05:19 crc kubenswrapper[4832]: I0125 09:05:19.880471 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcmt8" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" containerName="registry-server" containerID="cri-o://493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd" gracePeriod=2 Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.349859 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.425403 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-utilities\") pod \"18e36e85-9425-49ab-90d7-f36434353dbe\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.427352 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnkt9\" (UniqueName: \"kubernetes.io/projected/18e36e85-9425-49ab-90d7-f36434353dbe-kube-api-access-mnkt9\") pod \"18e36e85-9425-49ab-90d7-f36434353dbe\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.427274 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-utilities" (OuterVolumeSpecName: "utilities") pod "18e36e85-9425-49ab-90d7-f36434353dbe" (UID: "18e36e85-9425-49ab-90d7-f36434353dbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.429019 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-catalog-content\") pod \"18e36e85-9425-49ab-90d7-f36434353dbe\" (UID: \"18e36e85-9425-49ab-90d7-f36434353dbe\") " Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.429897 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.435309 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e36e85-9425-49ab-90d7-f36434353dbe-kube-api-access-mnkt9" (OuterVolumeSpecName: "kube-api-access-mnkt9") pod "18e36e85-9425-49ab-90d7-f36434353dbe" (UID: "18e36e85-9425-49ab-90d7-f36434353dbe"). InnerVolumeSpecName "kube-api-access-mnkt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.499180 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18e36e85-9425-49ab-90d7-f36434353dbe" (UID: "18e36e85-9425-49ab-90d7-f36434353dbe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.531956 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e36e85-9425-49ab-90d7-f36434353dbe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.532006 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnkt9\" (UniqueName: \"kubernetes.io/projected/18e36e85-9425-49ab-90d7-f36434353dbe-kube-api-access-mnkt9\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.890269 4832 generic.go:334] "Generic (PLEG): container finished" podID="18e36e85-9425-49ab-90d7-f36434353dbe" containerID="493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd" exitCode=0 Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.890365 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmt8" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.890349 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmt8" event={"ID":"18e36e85-9425-49ab-90d7-f36434353dbe","Type":"ContainerDied","Data":"493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd"} Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.890799 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmt8" event={"ID":"18e36e85-9425-49ab-90d7-f36434353dbe","Type":"ContainerDied","Data":"cd0e59ee980256f7be93cf2566b2701d70befaec3c69692b927432b9f06644f1"} Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.890822 4832 scope.go:117] "RemoveContainer" containerID="493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.918558 4832 scope.go:117] "RemoveContainer" 
containerID="573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.950803 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcmt8"] Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.959783 4832 scope.go:117] "RemoveContainer" containerID="68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.959970 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcmt8"] Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.993276 4832 scope.go:117] "RemoveContainer" containerID="493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd" Jan 25 09:05:20 crc kubenswrapper[4832]: E0125 09:05:20.994067 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd\": container with ID starting with 493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd not found: ID does not exist" containerID="493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.994132 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd"} err="failed to get container status \"493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd\": rpc error: code = NotFound desc = could not find container \"493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd\": container with ID starting with 493875a3caf68987143ed8d95f7e45024d4d7a7771c9c83a4c165cf85beab0dd not found: ID does not exist" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.994165 4832 scope.go:117] "RemoveContainer" 
containerID="573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d" Jan 25 09:05:20 crc kubenswrapper[4832]: E0125 09:05:20.994686 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d\": container with ID starting with 573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d not found: ID does not exist" containerID="573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.994711 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d"} err="failed to get container status \"573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d\": rpc error: code = NotFound desc = could not find container \"573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d\": container with ID starting with 573dc49a92250846c850becedcc1372dca07a7dce599bb26bb2ff8b4ea41ea9d not found: ID does not exist" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.994726 4832 scope.go:117] "RemoveContainer" containerID="68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba" Jan 25 09:05:20 crc kubenswrapper[4832]: E0125 09:05:20.994992 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba\": container with ID starting with 68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba not found: ID does not exist" containerID="68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba" Jan 25 09:05:20 crc kubenswrapper[4832]: I0125 09:05:20.995029 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba"} err="failed to get container status \"68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba\": rpc error: code = NotFound desc = could not find container \"68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba\": container with ID starting with 68d460f79339b594d5d70fb15baa40ac39f768236f0bb70991fad9c99fefdaba not found: ID does not exist" Jan 25 09:05:21 crc kubenswrapper[4832]: I0125 09:05:21.682199 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" path="/var/lib/kubelet/pods/18e36e85-9425-49ab-90d7-f36434353dbe/volumes" Jan 25 09:05:27 crc kubenswrapper[4832]: I0125 09:05:27.958098 4832 generic.go:334] "Generic (PLEG): container finished" podID="a7e54325-87ed-4348-8d78-cd6d696a8ff9" containerID="b5fab16c9dc503ad4816d720fd8c63242c4b53817c0b7f628b7618e96979fee6" exitCode=0 Jan 25 09:05:27 crc kubenswrapper[4832]: I0125 09:05:27.958223 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" event={"ID":"a7e54325-87ed-4348-8d78-cd6d696a8ff9","Type":"ContainerDied","Data":"b5fab16c9dc503ad4816d720fd8c63242c4b53817c0b7f628b7618e96979fee6"} Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.080101 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.118053 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-49xd6"] Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.130581 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-49xd6"] Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.194626 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7e54325-87ed-4348-8d78-cd6d696a8ff9-host\") pod \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.194796 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7e54325-87ed-4348-8d78-cd6d696a8ff9-host" (OuterVolumeSpecName: "host") pod "a7e54325-87ed-4348-8d78-cd6d696a8ff9" (UID: "a7e54325-87ed-4348-8d78-cd6d696a8ff9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.194903 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpsxg\" (UniqueName: \"kubernetes.io/projected/a7e54325-87ed-4348-8d78-cd6d696a8ff9-kube-api-access-xpsxg\") pod \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\" (UID: \"a7e54325-87ed-4348-8d78-cd6d696a8ff9\") " Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.195744 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7e54325-87ed-4348-8d78-cd6d696a8ff9-host\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.201210 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e54325-87ed-4348-8d78-cd6d696a8ff9-kube-api-access-xpsxg" (OuterVolumeSpecName: "kube-api-access-xpsxg") pod "a7e54325-87ed-4348-8d78-cd6d696a8ff9" (UID: "a7e54325-87ed-4348-8d78-cd6d696a8ff9"). InnerVolumeSpecName "kube-api-access-xpsxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.297664 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpsxg\" (UniqueName: \"kubernetes.io/projected/a7e54325-87ed-4348-8d78-cd6d696a8ff9-kube-api-access-xpsxg\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.695035 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e54325-87ed-4348-8d78-cd6d696a8ff9" path="/var/lib/kubelet/pods/a7e54325-87ed-4348-8d78-cd6d696a8ff9/volumes" Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.981843 4832 scope.go:117] "RemoveContainer" containerID="b5fab16c9dc503ad4816d720fd8c63242c4b53817c0b7f628b7618e96979fee6" Jan 25 09:05:29 crc kubenswrapper[4832]: I0125 09:05:29.982001 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-49xd6" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.321034 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-qkv62"] Jan 25 09:05:30 crc kubenswrapper[4832]: E0125 09:05:30.321568 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" containerName="extract-utilities" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.321584 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" containerName="extract-utilities" Jan 25 09:05:30 crc kubenswrapper[4832]: E0125 09:05:30.321608 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e54325-87ed-4348-8d78-cd6d696a8ff9" containerName="container-00" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.321615 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e54325-87ed-4348-8d78-cd6d696a8ff9" containerName="container-00" Jan 25 09:05:30 crc kubenswrapper[4832]: E0125 09:05:30.321639 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" containerName="registry-server" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.321647 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" containerName="registry-server" Jan 25 09:05:30 crc kubenswrapper[4832]: E0125 09:05:30.321669 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" containerName="extract-content" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.321677 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" containerName="extract-content" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.321964 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e36e85-9425-49ab-90d7-f36434353dbe" 
containerName="registry-server" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.321984 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e54325-87ed-4348-8d78-cd6d696a8ff9" containerName="container-00" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.322753 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.419877 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcaae825-88b7-4a6d-8288-1f289c5adfdb-host\") pod \"crc-debug-qkv62\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.420055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrm45\" (UniqueName: \"kubernetes.io/projected/bcaae825-88b7-4a6d-8288-1f289c5adfdb-kube-api-access-xrm45\") pod \"crc-debug-qkv62\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.522965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcaae825-88b7-4a6d-8288-1f289c5adfdb-host\") pod \"crc-debug-qkv62\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.523058 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrm45\" (UniqueName: \"kubernetes.io/projected/bcaae825-88b7-4a6d-8288-1f289c5adfdb-kube-api-access-xrm45\") pod \"crc-debug-qkv62\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc 
kubenswrapper[4832]: I0125 09:05:30.523141 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcaae825-88b7-4a6d-8288-1f289c5adfdb-host\") pod \"crc-debug-qkv62\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.547293 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrm45\" (UniqueName: \"kubernetes.io/projected/bcaae825-88b7-4a6d-8288-1f289c5adfdb-kube-api-access-xrm45\") pod \"crc-debug-qkv62\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.643026 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.993543 4832 generic.go:334] "Generic (PLEG): container finished" podID="bcaae825-88b7-4a6d-8288-1f289c5adfdb" containerID="c717040fbd9a77d1c22ec4dbe22636dce64ca4a09b9b5b8222f4aff5681c6b07" exitCode=0 Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.993630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/crc-debug-qkv62" event={"ID":"bcaae825-88b7-4a6d-8288-1f289c5adfdb","Type":"ContainerDied","Data":"c717040fbd9a77d1c22ec4dbe22636dce64ca4a09b9b5b8222f4aff5681c6b07"} Jan 25 09:05:30 crc kubenswrapper[4832]: I0125 09:05:30.993939 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/crc-debug-qkv62" event={"ID":"bcaae825-88b7-4a6d-8288-1f289c5adfdb","Type":"ContainerStarted","Data":"12205306d128783a94a4bf55ad8b5c5a57679a3f830d142293336aad26010332"} Jan 25 09:05:31 crc kubenswrapper[4832]: I0125 09:05:31.467478 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-qkv62"] Jan 25 09:05:31 crc 
kubenswrapper[4832]: I0125 09:05:31.476801 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-qkv62"] Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.132709 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.258336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrm45\" (UniqueName: \"kubernetes.io/projected/bcaae825-88b7-4a6d-8288-1f289c5adfdb-kube-api-access-xrm45\") pod \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.258483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcaae825-88b7-4a6d-8288-1f289c5adfdb-host\") pod \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\" (UID: \"bcaae825-88b7-4a6d-8288-1f289c5adfdb\") " Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.258983 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcaae825-88b7-4a6d-8288-1f289c5adfdb-host" (OuterVolumeSpecName: "host") pod "bcaae825-88b7-4a6d-8288-1f289c5adfdb" (UID: "bcaae825-88b7-4a6d-8288-1f289c5adfdb"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.259375 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bcaae825-88b7-4a6d-8288-1f289c5adfdb-host\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.265101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcaae825-88b7-4a6d-8288-1f289c5adfdb-kube-api-access-xrm45" (OuterVolumeSpecName: "kube-api-access-xrm45") pod "bcaae825-88b7-4a6d-8288-1f289c5adfdb" (UID: "bcaae825-88b7-4a6d-8288-1f289c5adfdb"). InnerVolumeSpecName "kube-api-access-xrm45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.361068 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrm45\" (UniqueName: \"kubernetes.io/projected/bcaae825-88b7-4a6d-8288-1f289c5adfdb-kube-api-access-xrm45\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.640374 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-dx89r"] Jan 25 09:05:32 crc kubenswrapper[4832]: E0125 09:05:32.640989 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcaae825-88b7-4a6d-8288-1f289c5adfdb" containerName="container-00" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.641010 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcaae825-88b7-4a6d-8288-1f289c5adfdb" containerName="container-00" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.641241 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcaae825-88b7-4a6d-8288-1f289c5adfdb" containerName="container-00" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.642073 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.768061 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44bb552e-2a8d-4534-bd10-54a926ca3361-host\") pod \"crc-debug-dx89r\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.768589 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gmb\" (UniqueName: \"kubernetes.io/projected/44bb552e-2a8d-4534-bd10-54a926ca3361-kube-api-access-q5gmb\") pod \"crc-debug-dx89r\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.870456 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5gmb\" (UniqueName: \"kubernetes.io/projected/44bb552e-2a8d-4534-bd10-54a926ca3361-kube-api-access-q5gmb\") pod \"crc-debug-dx89r\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.870598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44bb552e-2a8d-4534-bd10-54a926ca3361-host\") pod \"crc-debug-dx89r\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.870853 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44bb552e-2a8d-4534-bd10-54a926ca3361-host\") pod \"crc-debug-dx89r\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:32 crc 
kubenswrapper[4832]: I0125 09:05:32.890660 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5gmb\" (UniqueName: \"kubernetes.io/projected/44bb552e-2a8d-4534-bd10-54a926ca3361-kube-api-access-q5gmb\") pod \"crc-debug-dx89r\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:32 crc kubenswrapper[4832]: I0125 09:05:32.959577 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:33 crc kubenswrapper[4832]: I0125 09:05:33.016753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/crc-debug-dx89r" event={"ID":"44bb552e-2a8d-4534-bd10-54a926ca3361","Type":"ContainerStarted","Data":"b760df66f2ba2416eb782384a3abe954c658cfc60c0d21859624262b5485fa19"} Jan 25 09:05:33 crc kubenswrapper[4832]: I0125 09:05:33.018862 4832 scope.go:117] "RemoveContainer" containerID="c717040fbd9a77d1c22ec4dbe22636dce64ca4a09b9b5b8222f4aff5681c6b07" Jan 25 09:05:33 crc kubenswrapper[4832]: I0125 09:05:33.018932 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-qkv62" Jan 25 09:05:33 crc kubenswrapper[4832]: I0125 09:05:33.683690 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcaae825-88b7-4a6d-8288-1f289c5adfdb" path="/var/lib/kubelet/pods/bcaae825-88b7-4a6d-8288-1f289c5adfdb/volumes" Jan 25 09:05:34 crc kubenswrapper[4832]: I0125 09:05:34.032095 4832 generic.go:334] "Generic (PLEG): container finished" podID="44bb552e-2a8d-4534-bd10-54a926ca3361" containerID="ccc59e75270c7d481a25644538e821d573142d0605157f69ca87db643b4bc921" exitCode=0 Jan 25 09:05:34 crc kubenswrapper[4832]: I0125 09:05:34.032140 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/crc-debug-dx89r" event={"ID":"44bb552e-2a8d-4534-bd10-54a926ca3361","Type":"ContainerDied","Data":"ccc59e75270c7d481a25644538e821d573142d0605157f69ca87db643b4bc921"} Jan 25 09:05:34 crc kubenswrapper[4832]: I0125 09:05:34.066681 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-dx89r"] Jan 25 09:05:34 crc kubenswrapper[4832]: I0125 09:05:34.076776 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v7wc8/crc-debug-dx89r"] Jan 25 09:05:35 crc kubenswrapper[4832]: I0125 09:05:35.631121 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:35 crc kubenswrapper[4832]: I0125 09:05:35.743007 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5gmb\" (UniqueName: \"kubernetes.io/projected/44bb552e-2a8d-4534-bd10-54a926ca3361-kube-api-access-q5gmb\") pod \"44bb552e-2a8d-4534-bd10-54a926ca3361\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " Jan 25 09:05:35 crc kubenswrapper[4832]: I0125 09:05:35.743381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44bb552e-2a8d-4534-bd10-54a926ca3361-host\") pod \"44bb552e-2a8d-4534-bd10-54a926ca3361\" (UID: \"44bb552e-2a8d-4534-bd10-54a926ca3361\") " Jan 25 09:05:35 crc kubenswrapper[4832]: I0125 09:05:35.743862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44bb552e-2a8d-4534-bd10-54a926ca3361-host" (OuterVolumeSpecName: "host") pod "44bb552e-2a8d-4534-bd10-54a926ca3361" (UID: "44bb552e-2a8d-4534-bd10-54a926ca3361"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 09:05:35 crc kubenswrapper[4832]: I0125 09:05:35.749815 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bb552e-2a8d-4534-bd10-54a926ca3361-kube-api-access-q5gmb" (OuterVolumeSpecName: "kube-api-access-q5gmb") pod "44bb552e-2a8d-4534-bd10-54a926ca3361" (UID: "44bb552e-2a8d-4534-bd10-54a926ca3361"). InnerVolumeSpecName "kube-api-access-q5gmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:05:35 crc kubenswrapper[4832]: I0125 09:05:35.845926 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5gmb\" (UniqueName: \"kubernetes.io/projected/44bb552e-2a8d-4534-bd10-54a926ca3361-kube-api-access-q5gmb\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:35 crc kubenswrapper[4832]: I0125 09:05:35.846260 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44bb552e-2a8d-4534-bd10-54a926ca3361-host\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:36 crc kubenswrapper[4832]: I0125 09:05:36.050191 4832 scope.go:117] "RemoveContainer" containerID="ccc59e75270c7d481a25644538e821d573142d0605157f69ca87db643b4bc921" Jan 25 09:05:36 crc kubenswrapper[4832]: I0125 09:05:36.050248 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v7wc8/crc-debug-dx89r" Jan 25 09:05:37 crc kubenswrapper[4832]: I0125 09:05:37.680971 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bb552e-2a8d-4534-bd10-54a926ca3361" path="/var/lib/kubelet/pods/44bb552e-2a8d-4534-bd10-54a926ca3361/volumes" Jan 25 09:05:39 crc kubenswrapper[4832]: I0125 09:05:39.724447 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-658c5f7995-t6v6k" podUID="81bd3301-f264-4150-8f71-869af2c1ed3d" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.904792 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5qqrp"] Jan 25 09:05:44 crc kubenswrapper[4832]: E0125 09:05:44.905734 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bb552e-2a8d-4534-bd10-54a926ca3361" containerName="container-00" Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.905748 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="44bb552e-2a8d-4534-bd10-54a926ca3361" containerName="container-00" Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.905942 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bb552e-2a8d-4534-bd10-54a926ca3361" containerName="container-00" Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.907309 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.927650 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qqrp"] Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.982404 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbwn\" (UniqueName: \"kubernetes.io/projected/e2a84dac-610e-4493-be69-8487458103ea-kube-api-access-tgbwn\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.982520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-catalog-content\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:44 crc kubenswrapper[4832]: I0125 09:05:44.982582 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-utilities\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.085263 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tgbwn\" (UniqueName: \"kubernetes.io/projected/e2a84dac-610e-4493-be69-8487458103ea-kube-api-access-tgbwn\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.085323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-catalog-content\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.085353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-utilities\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.086015 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-utilities\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.086100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-catalog-content\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.104973 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tgbwn\" (UniqueName: \"kubernetes.io/projected/e2a84dac-610e-4493-be69-8487458103ea-kube-api-access-tgbwn\") pod \"redhat-marketplace-5qqrp\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.295074 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:45 crc kubenswrapper[4832]: I0125 09:05:45.858042 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qqrp"] Jan 25 09:05:46 crc kubenswrapper[4832]: I0125 09:05:46.137059 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qqrp" event={"ID":"e2a84dac-610e-4493-be69-8487458103ea","Type":"ContainerStarted","Data":"7f509261edf5e0cc4ff2c7cb3ea5a118e521bf1be385fce398adc671a925e8b6"} Jan 25 09:05:47 crc kubenswrapper[4832]: I0125 09:05:47.147763 4832 generic.go:334] "Generic (PLEG): container finished" podID="e2a84dac-610e-4493-be69-8487458103ea" containerID="73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a" exitCode=0 Jan 25 09:05:47 crc kubenswrapper[4832]: I0125 09:05:47.147895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qqrp" event={"ID":"e2a84dac-610e-4493-be69-8487458103ea","Type":"ContainerDied","Data":"73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a"} Jan 25 09:05:47 crc kubenswrapper[4832]: I0125 09:05:47.150226 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 09:05:48 crc kubenswrapper[4832]: I0125 09:05:48.161852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qqrp" 
event={"ID":"e2a84dac-610e-4493-be69-8487458103ea","Type":"ContainerStarted","Data":"729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e"} Jan 25 09:05:49 crc kubenswrapper[4832]: I0125 09:05:49.173800 4832 generic.go:334] "Generic (PLEG): container finished" podID="e2a84dac-610e-4493-be69-8487458103ea" containerID="729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e" exitCode=0 Jan 25 09:05:49 crc kubenswrapper[4832]: I0125 09:05:49.173979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qqrp" event={"ID":"e2a84dac-610e-4493-be69-8487458103ea","Type":"ContainerDied","Data":"729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e"} Jan 25 09:05:50 crc kubenswrapper[4832]: I0125 09:05:50.188052 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qqrp" event={"ID":"e2a84dac-610e-4493-be69-8487458103ea","Type":"ContainerStarted","Data":"bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f"} Jan 25 09:05:50 crc kubenswrapper[4832]: I0125 09:05:50.214597 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5qqrp" podStartSLOduration=3.769903279 podStartE2EDuration="6.21456905s" podCreationTimestamp="2026-01-25 09:05:44 +0000 UTC" firstStartedPulling="2026-01-25 09:05:47.149954517 +0000 UTC m=+4129.823778050" lastFinishedPulling="2026-01-25 09:05:49.594620268 +0000 UTC m=+4132.268443821" observedRunningTime="2026-01-25 09:05:50.211655549 +0000 UTC m=+4132.885479102" watchObservedRunningTime="2026-01-25 09:05:50.21456905 +0000 UTC m=+4132.888392593" Jan 25 09:05:55 crc kubenswrapper[4832]: I0125 09:05:55.295512 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:55 crc kubenswrapper[4832]: I0125 09:05:55.296144 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:55 crc kubenswrapper[4832]: I0125 09:05:55.347204 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:56 crc kubenswrapper[4832]: I0125 09:05:56.619596 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:56 crc kubenswrapper[4832]: I0125 09:05:56.717290 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qqrp"] Jan 25 09:05:58 crc kubenswrapper[4832]: I0125 09:05:58.299886 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5qqrp" podUID="e2a84dac-610e-4493-be69-8487458103ea" containerName="registry-server" containerID="cri-o://bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f" gracePeriod=2 Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.282573 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.317322 4832 generic.go:334] "Generic (PLEG): container finished" podID="e2a84dac-610e-4493-be69-8487458103ea" containerID="bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f" exitCode=0 Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.317413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qqrp" event={"ID":"e2a84dac-610e-4493-be69-8487458103ea","Type":"ContainerDied","Data":"bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f"} Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.317452 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qqrp" event={"ID":"e2a84dac-610e-4493-be69-8487458103ea","Type":"ContainerDied","Data":"7f509261edf5e0cc4ff2c7cb3ea5a118e521bf1be385fce398adc671a925e8b6"} Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.317501 4832 scope.go:117] "RemoveContainer" containerID="bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.317701 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qqrp" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.342631 4832 scope.go:117] "RemoveContainer" containerID="729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.381102 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-utilities\") pod \"e2a84dac-610e-4493-be69-8487458103ea\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.381256 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-catalog-content\") pod \"e2a84dac-610e-4493-be69-8487458103ea\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.381498 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgbwn\" (UniqueName: \"kubernetes.io/projected/e2a84dac-610e-4493-be69-8487458103ea-kube-api-access-tgbwn\") pod \"e2a84dac-610e-4493-be69-8487458103ea\" (UID: \"e2a84dac-610e-4493-be69-8487458103ea\") " Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.382453 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-utilities" (OuterVolumeSpecName: "utilities") pod "e2a84dac-610e-4493-be69-8487458103ea" (UID: "e2a84dac-610e-4493-be69-8487458103ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.390896 4832 scope.go:117] "RemoveContainer" containerID="73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.391631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a84dac-610e-4493-be69-8487458103ea-kube-api-access-tgbwn" (OuterVolumeSpecName: "kube-api-access-tgbwn") pod "e2a84dac-610e-4493-be69-8487458103ea" (UID: "e2a84dac-610e-4493-be69-8487458103ea"). InnerVolumeSpecName "kube-api-access-tgbwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.413153 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2a84dac-610e-4493-be69-8487458103ea" (UID: "e2a84dac-610e-4493-be69-8487458103ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.453957 4832 scope.go:117] "RemoveContainer" containerID="bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f" Jan 25 09:05:59 crc kubenswrapper[4832]: E0125 09:05:59.454533 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f\": container with ID starting with bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f not found: ID does not exist" containerID="bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.454584 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f"} err="failed to get container status \"bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f\": rpc error: code = NotFound desc = could not find container \"bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f\": container with ID starting with bd418f2598843250341b9a9cd1fcd3fb1ac63eae849745c47c433c2f5abfcf4f not found: ID does not exist" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.454611 4832 scope.go:117] "RemoveContainer" containerID="729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e" Jan 25 09:05:59 crc kubenswrapper[4832]: E0125 09:05:59.454939 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e\": container with ID starting with 729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e not found: ID does not exist" containerID="729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.454975 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e"} err="failed to get container status \"729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e\": rpc error: code = NotFound desc = could not find container \"729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e\": container with ID starting with 729861e9dc8521ff6ec8571476a57f8ddc99e9f5768f38a60c354bb6c4b2691e not found: ID does not exist" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.454999 4832 scope.go:117] "RemoveContainer" containerID="73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a" Jan 25 09:05:59 crc kubenswrapper[4832]: E0125 09:05:59.455301 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a\": container with ID starting with 73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a not found: ID does not exist" containerID="73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.455353 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a"} err="failed to get container status \"73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a\": rpc error: code = NotFound desc = could not find container \"73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a\": container with ID starting with 73aa4bbdd65755672f3e5893d76d80238d74e9d31fe6d953bcbc7fb3233c6b9a not found: ID does not exist" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.483864 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.483905 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a84dac-610e-4493-be69-8487458103ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.483916 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgbwn\" (UniqueName: \"kubernetes.io/projected/e2a84dac-610e-4493-be69-8487458103ea-kube-api-access-tgbwn\") on node \"crc\" DevicePath \"\"" Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.653513 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qqrp"] Jan 25 09:05:59 crc kubenswrapper[4832]: I0125 09:05:59.681180 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qqrp"] Jan 25 09:06:01 crc kubenswrapper[4832]: I0125 09:06:01.682273 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a84dac-610e-4493-be69-8487458103ea" path="/var/lib/kubelet/pods/e2a84dac-610e-4493-be69-8487458103ea/volumes" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.290159 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9f466dd54-88fdd_ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5/barbican-api/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.502923 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9f466dd54-88fdd_ae8a1d7e-bb0c-4228-b39b-1de7e6c62ff5/barbican-api-log/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.522883 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b4947bb84-pmdh6_4899f618-1f51-4d34-9970-7c096359b47e/barbican-keystone-listener/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.554376 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-7b4947bb84-pmdh6_4899f618-1f51-4d34-9970-7c096359b47e/barbican-keystone-listener-log/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.685786 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-855cdf875c-rxk79_26baac3d-6d07-4f33-956e-4048e3318099/barbican-worker/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.695466 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-855cdf875c-rxk79_26baac3d-6d07-4f33-956e-4048e3318099/barbican-worker-log/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.870938 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-hdzmf_146a1b8e-1733-40ca-81a5-d73122618f4d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.959650 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/ceilometer-central-agent/0.log" Jan 25 09:06:11 crc kubenswrapper[4832]: I0125 09:06:11.979172 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/ceilometer-notification-agent/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.082366 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/proxy-httpd/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.138900 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_eb5b7f6d-8b64-475d-b4b4-c12ce7e9c468/sg-core/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.249707 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_db0ff763-c24c-45a4-b3c5-7dc32962816f/cinder-api/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 
09:06:12.317573 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_db0ff763-c24c-45a4-b3c5-7dc32962816f/cinder-api-log/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.486824 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c3f65dba-194a-46be-b020-24ee852b965a/cinder-scheduler/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.521916 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c3f65dba-194a-46be-b020-24ee852b965a/probe/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.633857 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fr296_ef813e8a-d19f-4638-bd75-5cba3643b1d0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.720798 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rk2l7_10ca3609-7786-4065-9125-f1460e9718f2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:12 crc kubenswrapper[4832]: I0125 09:06:12.841344 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5r9mm_8b7acd70-a72a-477f-af0d-455512cb4e81/init/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.017913 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5r9mm_8b7acd70-a72a-477f-af0d-455512cb4e81/init/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.040684 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5wttx_c2445bfc-4cb1-417b-9eea-3ef40a5dcb6f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.070922 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-5r9mm_8b7acd70-a72a-477f-af0d-455512cb4e81/dnsmasq-dns/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.286674 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ba1988f-0ee4-4e4d-9b32-eff3fe30c959/glance-log/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.293188 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ba1988f-0ee4-4e4d-9b32-eff3fe30c959/glance-httpd/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.465482 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ca10626f-eeda-438c-8d2b-5b7c734db90d/glance-httpd/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.468142 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ca10626f-eeda-438c-8d2b-5b7c734db90d/glance-log/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.587161 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f649cfc6-vzpx7_26fd6803-3263-4989-a86e-908f6a504d14/horizon/1.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.791737 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ftpbj_ca88c519-c20b-4e26-86c2-5b62b163af37/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:13 crc kubenswrapper[4832]: I0125 09:06:13.805158 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f649cfc6-vzpx7_26fd6803-3263-4989-a86e-908f6a504d14/horizon/0.log" Jan 25 09:06:14 crc kubenswrapper[4832]: I0125 09:06:14.068191 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f649cfc6-vzpx7_26fd6803-3263-4989-a86e-908f6a504d14/horizon-log/0.log" Jan 25 09:06:14 crc kubenswrapper[4832]: I0125 09:06:14.103467 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-b4dhr_112e50b5-86e0-4401-b4f9-b32be27ab508/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:14 crc kubenswrapper[4832]: I0125 09:06:14.288798 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29488861-lgmdj_15559108-ea2b-4acc-908e-8b3d1f7a3dbf/keystone-cron/0.log" Jan 25 09:06:14 crc kubenswrapper[4832]: I0125 09:06:14.441694 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-699f4599dd-j695n_b32b998a-5689-42f6-9c15-b7e794acb916/keystone-api/0.log" Jan 25 09:06:14 crc kubenswrapper[4832]: I0125 09:06:14.502730 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ad2ea2ab-d727-4547-b2b4-d905b66428e5/kube-state-metrics/0.log" Jan 25 09:06:14 crc kubenswrapper[4832]: I0125 09:06:14.658653 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-sllb7_d6839ea5-4201-48d8-b390-16fac4368cb9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:15 crc kubenswrapper[4832]: I0125 09:06:15.026261 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857c8bdbcf-kwd2q_d1a230b2-45ba-4298-b3d6-2280431c592d/neutron-httpd/0.log" Jan 25 09:06:15 crc kubenswrapper[4832]: I0125 09:06:15.042192 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-857c8bdbcf-kwd2q_d1a230b2-45ba-4298-b3d6-2280431c592d/neutron-api/0.log" Jan 25 09:06:15 crc kubenswrapper[4832]: I0125 09:06:15.191669 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cz2vj_e0e39d1f-665b-486a-bc7c-d89d1e50fee9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:15 crc kubenswrapper[4832]: I0125 09:06:15.771470 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_853956ed-8d6c-401a-9d3b-7325013053a4/nova-api-log/0.log" Jan 25 09:06:15 crc kubenswrapper[4832]: I0125 09:06:15.799146 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b0b4eea3-2f29-4f50-a197-b3e6531df0d5/nova-cell0-conductor-conductor/0.log" Jan 25 09:06:16 crc kubenswrapper[4832]: I0125 09:06:16.221953 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c420c690-6a2a-4ccc-876b-b3ca1d5d8781/nova-cell1-novncproxy-novncproxy/0.log" Jan 25 09:06:16 crc kubenswrapper[4832]: I0125 09:06:16.228210 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_2052de31-aa8d-4127-b9ef-12bdb9d90fd9/nova-cell1-conductor-conductor/0.log" Jan 25 09:06:16 crc kubenswrapper[4832]: I0125 09:06:16.317327 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_853956ed-8d6c-401a-9d3b-7325013053a4/nova-api-api/0.log" Jan 25 09:06:16 crc kubenswrapper[4832]: I0125 09:06:16.476029 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-f8kjk_2859d34c-ae01-4c03-a14a-5256e17130ed/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:16 crc kubenswrapper[4832]: I0125 09:06:16.620913 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3c0a6750-31ec-4a66-8160-2f74a44a5d33/nova-metadata-log/0.log" Jan 25 09:06:16 crc kubenswrapper[4832]: I0125 09:06:16.922246 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43f07a95-68ce-4138-b2ff-ef2543e68e46/mysql-bootstrap/0.log" Jan 25 09:06:16 crc kubenswrapper[4832]: I0125 09:06:16.943821 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d322a933-38eb-4eb0-81c7-86d11a5f2d2c/nova-scheduler-scheduler/0.log" Jan 25 09:06:17 crc kubenswrapper[4832]: I0125 
09:06:17.093685 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43f07a95-68ce-4138-b2ff-ef2543e68e46/galera/0.log" Jan 25 09:06:17 crc kubenswrapper[4832]: I0125 09:06:17.136379 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43f07a95-68ce-4138-b2ff-ef2543e68e46/mysql-bootstrap/0.log" Jan 25 09:06:17 crc kubenswrapper[4832]: I0125 09:06:17.988606 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9ca53255-293b-4c35-a202-ac7ad7ac8d65/mysql-bootstrap/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.159156 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3c0a6750-31ec-4a66-8160-2f74a44a5d33/nova-metadata-metadata/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.178758 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9ca53255-293b-4c35-a202-ac7ad7ac8d65/galera/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.182279 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9ca53255-293b-4c35-a202-ac7ad7ac8d65/mysql-bootstrap/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.415982 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a962ff03-629f-458b-b5dc-3980f55d9f66/openstackclient/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.422833 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hcd8h_4b6aa9f6-e110-4147-a8d0-b1c8287226d1/openstack-network-exporter/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.630374 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n6hrr_54cecc85-b18f-4136-bd00-cbcc0f680643/ovn-controller/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.666997 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovsdb-server-init/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.943870 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovs-vswitchd/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.945261 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovsdb-server/0.log" Jan 25 09:06:18 crc kubenswrapper[4832]: I0125 09:06:18.975336 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tk26k_1eb6b5ae-927c-4920-9ad4-bc1936555efb/ovsdb-server-init/0.log" Jan 25 09:06:19 crc kubenswrapper[4832]: I0125 09:06:19.696543 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_828fc400-0bbb-4fbb-ae6c-7aa12c12864a/openstack-network-exporter/0.log" Jan 25 09:06:19 crc kubenswrapper[4832]: I0125 09:06:19.733005 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_828fc400-0bbb-4fbb-ae6c-7aa12c12864a/ovn-northd/0.log" Jan 25 09:06:19 crc kubenswrapper[4832]: I0125 09:06:19.766432 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bxs2f_23b2cd4e-4921-4082-8a44-50c065f88f52/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:20 crc kubenswrapper[4832]: I0125 09:06:20.013828 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0d2475d7-df45-45d0-a604-22b5008d000f/ovsdbserver-nb/0.log" Jan 25 09:06:20 crc kubenswrapper[4832]: I0125 09:06:20.013840 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0d2475d7-df45-45d0-a604-22b5008d000f/openstack-network-exporter/0.log" Jan 25 09:06:20 crc kubenswrapper[4832]: I0125 
09:06:20.194141 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_666395bf-0cf6-4e7a-a0d0-2ad1a8928424/openstack-network-exporter/0.log" Jan 25 09:06:20 crc kubenswrapper[4832]: I0125 09:06:20.570556 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_666395bf-0cf6-4e7a-a0d0-2ad1a8928424/ovsdbserver-sb/0.log" Jan 25 09:06:20 crc kubenswrapper[4832]: I0125 09:06:20.730715 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd5868dbb-cxxfw_c6f5e19c-ec70-424e-a446-09b1b78697be/placement-api/0.log" Jan 25 09:06:20 crc kubenswrapper[4832]: I0125 09:06:20.908208 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd5868dbb-cxxfw_c6f5e19c-ec70-424e-a446-09b1b78697be/placement-log/0.log" Jan 25 09:06:20 crc kubenswrapper[4832]: I0125 09:06:20.971440 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9cf62746-47cb-4e83-9211-57a799a06e93/setup-container/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.114266 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9cf62746-47cb-4e83-9211-57a799a06e93/setup-container/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.219253 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_efe389bf-7e64-417c-96c8-d302858a0722/setup-container/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.291470 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9cf62746-47cb-4e83-9211-57a799a06e93/rabbitmq/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.512823 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_efe389bf-7e64-417c-96c8-d302858a0722/setup-container/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.584960 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_efe389bf-7e64-417c-96c8-d302858a0722/rabbitmq/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.653267 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-x685s_63023ae6-5cfd-4940-8160-7547220bbb5b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.786950 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-lr429_306310b5-6753-4a5a-b279-41e070c2f970/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:21 crc kubenswrapper[4832]: I0125 09:06:21.941540 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-97bvv_be2a25f4-32ba-4406-b6a6-bdae29720048/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:22 crc kubenswrapper[4832]: I0125 09:06:22.019257 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qvjw2_acaaf210-0845-4432-b149-30c8c038bfcb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:22 crc kubenswrapper[4832]: I0125 09:06:22.223570 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7xcl5_977dfa38-e1a5-4daf-b1b4-4be30da2ee0f/ssh-known-hosts-edpm-deployment/0.log" Jan 25 09:06:22 crc kubenswrapper[4832]: I0125 09:06:22.417830 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-658c5f7995-t6v6k_81bd3301-f264-4150-8f71-869af2c1ed3d/proxy-httpd/0.log" Jan 25 09:06:22 crc kubenswrapper[4832]: I0125 09:06:22.484534 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-s7nx7_8780670c-4459-4064-a5ee-d22abf7923aa/swift-ring-rebalance/0.log" Jan 25 09:06:22 crc kubenswrapper[4832]: I0125 09:06:22.497815 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-658c5f7995-t6v6k_81bd3301-f264-4150-8f71-869af2c1ed3d/proxy-server/0.log" Jan 25 09:06:22 crc kubenswrapper[4832]: I0125 09:06:22.713754 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-reaper/0.log" Jan 25 09:06:22 crc kubenswrapper[4832]: I0125 09:06:22.727240 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-auditor/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.053860 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-replicator/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.059136 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/account-server/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.147242 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-auditor/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.166930 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-replicator/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.259160 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-server/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.360516 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/container-updater/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.377819 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-expirer/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.412860 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-auditor/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.570249 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-updater/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.628373 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-replicator/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.635944 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/rsync/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.641537 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/object-server/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.799534 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_68ef9e02-9e33-48c3-a32b-ceae36687171/swift-recon-cron/0.log" Jan 25 09:06:23 crc kubenswrapper[4832]: I0125 09:06:23.907518 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-548xj_303826b3-afb9-4ce0-a967-9a30c910c85b/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 25 09:06:24 crc kubenswrapper[4832]: I0125 09:06:24.045280 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f075c376-fe6e-44de-bb3d-113de5b9fb3f/tempest-tests-tempest-tests-runner/0.log" Jan 25 09:06:24 crc kubenswrapper[4832]: I0125 09:06:24.155578 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5d3f03a6-2f57-4a65-9e70-0828473a9469/test-operator-logs-container/0.log"
Jan 25 09:06:24 crc kubenswrapper[4832]: I0125 09:06:24.368157 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jb565_51471519-c6e2-4ab1-9536-3443579b4bb1/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 25 09:06:33 crc kubenswrapper[4832]: I0125 09:06:33.870616 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_44713664-4137-4321-baff-36c54dcbae96/memcached/0.log"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.199310 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9tlbb"]
Jan 25 09:06:40 crc kubenswrapper[4832]: E0125 09:06:40.203982 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a84dac-610e-4493-be69-8487458103ea" containerName="extract-utilities"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.204004 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a84dac-610e-4493-be69-8487458103ea" containerName="extract-utilities"
Jan 25 09:06:40 crc kubenswrapper[4832]: E0125 09:06:40.204021 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a84dac-610e-4493-be69-8487458103ea" containerName="registry-server"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.204027 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a84dac-610e-4493-be69-8487458103ea" containerName="registry-server"
Jan 25 09:06:40 crc kubenswrapper[4832]: E0125 09:06:40.204055 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a84dac-610e-4493-be69-8487458103ea" containerName="extract-content"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.204061 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a84dac-610e-4493-be69-8487458103ea"
containerName="extract-content"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.204253 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a84dac-610e-4493-be69-8487458103ea" containerName="registry-server"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.205656 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.217799 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9tlbb"]
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.286356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-catalog-content\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.286423 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrc8\" (UniqueName: \"kubernetes.io/projected/86395d44-baee-4faa-8589-5212b9db3d14-kube-api-access-zfrc8\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.286703 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-utilities\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.388331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-utilities\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.388421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-catalog-content\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.388455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfrc8\" (UniqueName: \"kubernetes.io/projected/86395d44-baee-4faa-8589-5212b9db3d14-kube-api-access-zfrc8\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.388795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-utilities\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.388916 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-catalog-content\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.422310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfrc8\" (UniqueName:
\"kubernetes.io/projected/86395d44-baee-4faa-8589-5212b9db3d14-kube-api-access-zfrc8\") pod \"community-operators-9tlbb\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") " pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:40 crc kubenswrapper[4832]: I0125 09:06:40.525176 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:41 crc kubenswrapper[4832]: I0125 09:06:41.212605 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9tlbb"]
Jan 25 09:06:41 crc kubenswrapper[4832]: I0125 09:06:41.729340 4832 generic.go:334] "Generic (PLEG): container finished" podID="86395d44-baee-4faa-8589-5212b9db3d14" containerID="98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715" exitCode=0
Jan 25 09:06:41 crc kubenswrapper[4832]: I0125 09:06:41.729432 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tlbb" event={"ID":"86395d44-baee-4faa-8589-5212b9db3d14","Type":"ContainerDied","Data":"98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715"}
Jan 25 09:06:41 crc kubenswrapper[4832]: I0125 09:06:41.729720 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tlbb" event={"ID":"86395d44-baee-4faa-8589-5212b9db3d14","Type":"ContainerStarted","Data":"081f3bda5ecfc17d9b22febd13026483fe7828600cb94446ba63691cd532a929"}
Jan 25 09:06:42 crc kubenswrapper[4832]: I0125 09:06:42.742247 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tlbb" event={"ID":"86395d44-baee-4faa-8589-5212b9db3d14","Type":"ContainerStarted","Data":"5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3"}
Jan 25 09:06:43 crc kubenswrapper[4832]: I0125 09:06:43.751769 4832 generic.go:334] "Generic (PLEG): container finished" podID="86395d44-baee-4faa-8589-5212b9db3d14"
containerID="5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3" exitCode=0
Jan 25 09:06:43 crc kubenswrapper[4832]: I0125 09:06:43.751820 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tlbb" event={"ID":"86395d44-baee-4faa-8589-5212b9db3d14","Type":"ContainerDied","Data":"5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3"}
Jan 25 09:06:44 crc kubenswrapper[4832]: I0125 09:06:44.763440 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tlbb" event={"ID":"86395d44-baee-4faa-8589-5212b9db3d14","Type":"ContainerStarted","Data":"073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46"}
Jan 25 09:06:44 crc kubenswrapper[4832]: I0125 09:06:44.786572 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9tlbb" podStartSLOduration=2.131306609 podStartE2EDuration="4.786551572s" podCreationTimestamp="2026-01-25 09:06:40 +0000 UTC" firstStartedPulling="2026-01-25 09:06:41.731293392 +0000 UTC m=+4184.405116925" lastFinishedPulling="2026-01-25 09:06:44.386538355 +0000 UTC m=+4187.060361888" observedRunningTime="2026-01-25 09:06:44.781977209 +0000 UTC m=+4187.455800762" watchObservedRunningTime="2026-01-25 09:06:44.786551572 +0000 UTC m=+4187.460375105"
Jan 25 09:06:50 crc kubenswrapper[4832]: I0125 09:06:50.525808 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:50 crc kubenswrapper[4832]: I0125 09:06:50.526393 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:50 crc kubenswrapper[4832]: I0125 09:06:50.573901 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:50 crc kubenswrapper[4832]: I0125
09:06:50.863818 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:50 crc kubenswrapper[4832]: I0125 09:06:50.921380 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9tlbb"]
Jan 25 09:06:52 crc kubenswrapper[4832]: I0125 09:06:52.149924 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 09:06:52 crc kubenswrapper[4832]: I0125 09:06:52.150061 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 09:06:52 crc kubenswrapper[4832]: I0125 09:06:52.833873 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9tlbb" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="registry-server" containerID="cri-o://073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46" gracePeriod=2
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.500852 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/util/0.log"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.822261 4832 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.850312 4832 generic.go:334] "Generic (PLEG): container finished" podID="86395d44-baee-4faa-8589-5212b9db3d14" containerID="073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46" exitCode=0
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.850355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tlbb" event={"ID":"86395d44-baee-4faa-8589-5212b9db3d14","Type":"ContainerDied","Data":"073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46"}
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.850406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tlbb" event={"ID":"86395d44-baee-4faa-8589-5212b9db3d14","Type":"ContainerDied","Data":"081f3bda5ecfc17d9b22febd13026483fe7828600cb94446ba63691cd532a929"}
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.850427 4832 scope.go:117] "RemoveContainer" containerID="073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.850434 4832 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-9tlbb"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.876573 4832 scope.go:117] "RemoveContainer" containerID="5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.879205 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-utilities\") pod \"86395d44-baee-4faa-8589-5212b9db3d14\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") "
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.879312 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfrc8\" (UniqueName: \"kubernetes.io/projected/86395d44-baee-4faa-8589-5212b9db3d14-kube-api-access-zfrc8\") pod \"86395d44-baee-4faa-8589-5212b9db3d14\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") "
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.879415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-catalog-content\") pod \"86395d44-baee-4faa-8589-5212b9db3d14\" (UID: \"86395d44-baee-4faa-8589-5212b9db3d14\") "
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.880853 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-utilities" (OuterVolumeSpecName: "utilities") pod "86395d44-baee-4faa-8589-5212b9db3d14" (UID: "86395d44-baee-4faa-8589-5212b9db3d14"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.886234 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86395d44-baee-4faa-8589-5212b9db3d14-kube-api-access-zfrc8" (OuterVolumeSpecName: "kube-api-access-zfrc8") pod "86395d44-baee-4faa-8589-5212b9db3d14" (UID: "86395d44-baee-4faa-8589-5212b9db3d14"). InnerVolumeSpecName "kube-api-access-zfrc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.938205 4832 scope.go:117] "RemoveContainer" containerID="98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.957801 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86395d44-baee-4faa-8589-5212b9db3d14" (UID: "86395d44-baee-4faa-8589-5212b9db3d14"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.981352 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.981401 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86395d44-baee-4faa-8589-5212b9db3d14-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.981412 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfrc8\" (UniqueName: \"kubernetes.io/projected/86395d44-baee-4faa-8589-5212b9db3d14-kube-api-access-zfrc8\") on node \"crc\" DevicePath \"\""
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.982741 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/util/0.log"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.982853 4832 scope.go:117] "RemoveContainer" containerID="073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46"
Jan 25 09:06:53 crc kubenswrapper[4832]: E0125 09:06:53.983125 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46\": container with ID starting with 073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46 not found: ID does not exist" containerID="073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.983153 4832 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46"} err="failed to get container status \"073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46\": rpc error: code = NotFound desc = could not find container \"073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46\": container with ID starting with 073a4d813e96fcfba5b437c8b61021fa0e685e9e2da0fc65d7a9a1e67d0b6a46 not found: ID does not exist"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.983173 4832 scope.go:117] "RemoveContainer" containerID="5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3"
Jan 25 09:06:53 crc kubenswrapper[4832]: E0125 09:06:53.983345 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3\": container with ID starting with 5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3 not found: ID does not exist" containerID="5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.983365 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3"} err="failed to get container status \"5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3\": rpc error: code = NotFound desc = could not find container \"5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3\": container with ID starting with 5a367f2d4afbac6279b0252ad0ef81180dbe17856962e2b40f1d4cf30f0a1cb3 not found: ID does not exist"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.983379 4832 scope.go:117] "RemoveContainer" containerID="98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715"
Jan 25 09:06:53 crc kubenswrapper[4832]: E0125 09:06:53.983660 4832 log.go:32] "ContainerStatus from runtime service
failed" err="rpc error: code = NotFound desc = could not find container \"98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715\": container with ID starting with 98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715 not found: ID does not exist" containerID="98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715"
Jan 25 09:06:53 crc kubenswrapper[4832]: I0125 09:06:53.983680 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715"} err="failed to get container status \"98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715\": rpc error: code = NotFound desc = could not find container \"98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715\": container with ID starting with 98fdebeee26aeb5f38b65dcc7b52d6b1a6bba95274848a6752eed55bc8871715 not found: ID does not exist"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.043474 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/pull/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.048817 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/pull/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.189952 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9tlbb"]
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.198632 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9tlbb"]
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.238732 4832 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/pull/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.257377 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/util/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.275674 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2d2f0d7580858c77849655cfe8dde1d34625d82185eda51b1088a6ebe2g2vmq_f27419fd-d9b8-4ae4-ae3c-a9ad071152b2/extract/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.544958 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-hr9t5_8251d5ba-3a9a-429c-ba20-1af897640ad3/manager/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.591056 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-qdwdw_b3a8f752-cc73-4933-88d1-3b661a42ead2/manager/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.711345 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-75hsw_0cac9e7d-b342-4b55-a667-76fa1c144080/manager/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.825299 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-mgsq7_b1702aab-2dd8-488f-8a7f-93f43df4b0ab/manager/0.log"
Jan 25 09:06:54 crc kubenswrapper[4832]: I0125 09:06:54.975640 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-h4c7b_efdb6007-fdd7-4a18-9dba-4f1571f6f822/manager/0.log"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.065562
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nzjmz_3f993c1e-81ae-4e86-9b28-eccb1db48f2b/manager/0.log"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.333432 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-t8jng_44be34d2-851c-4bf5-a3fb-87607d045d1f/manager/0.log"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.422737 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-vt5m9_29b29aa4-b326-4515-9842-6d848c208096/manager/0.log"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.591534 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-vvwcx_50da9b0d-da00-4211-95cd-0218828341e5/manager/0.log"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.593446 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-mstsp_d75c853c-428e-4f6a-8a82-a050b71af662/manager/0.log"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.681021 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86395d44-baee-4faa-8589-5212b9db3d14" path="/var/lib/kubelet/pods/86395d44-baee-4faa-8589-5212b9db3d14/volumes"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.808632 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-4k5f7_31cef49b-390b-4029-bdc4-64893be3d183/manager/0.log"
Jan 25 09:06:55 crc kubenswrapper[4832]: I0125 09:06:55.837807 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-hpqjz_0c897c34-1c91-416c-91e2-65ae83958e10/manager/0.log"
Jan 25 09:06:56 crc kubenswrapper[4832]: I0125
09:06:56.190311 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-642xd_b618d12e-02c2-4ae7-872a-15bd233259b5/manager/0.log"
Jan 25 09:06:56 crc kubenswrapper[4832]: I0125 09:06:56.295338 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-q67lr_d221c44f-6fb5-4b96-b84e-f1d55253ed08/manager/0.log"
Jan 25 09:06:56 crc kubenswrapper[4832]: I0125 09:06:56.399960 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854b8jhw_3b784c4a-e1cf-42fb-ad96-dca059f63e79/manager/0.log"
Jan 25 09:06:56 crc kubenswrapper[4832]: I0125 09:06:56.584145 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d9d58658-glj79_6daad9ca-374e-4351-b5f4-3b262d9816b6/operator/0.log"
Jan 25 09:06:56 crc kubenswrapper[4832]: I0125 09:06:56.790883 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-k945x_40c93737-1880-48e7-a342-d3a8c8a5ad68/registry-server/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]: I0125 09:06:57.063595 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cf7rg_8d21c83b-b981-4466-b81a-ed7954d1f3cb/manager/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]: I0125 09:06:57.081534 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-lrsxz_1e30c775-7a32-478e-8c3c-7312757f846b/manager/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]: I0125 09:06:57.414217 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-f87nw_cdb822ca-2a1d-4b10-8d44-f2cb33173358/operator/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]:
I0125 09:06:57.647136 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-zwlrf_eb801494-724f-482a-a359-896e5b735b62/manager/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]: I0125 09:06:57.721201 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-59gds_47605944-bcb8-4196-9eb3-b26c2e923e70/manager/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]: I0125 09:06:57.787264 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-qnxqc_c3356b9d-3a3c-4583-9803-d08fcb621401/manager/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]: I0125 09:06:57.791712 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-745947945d-jwhxb_1529f819-52bd-428f-970f-5f67f071e729/manager/0.log"
Jan 25 09:06:57 crc kubenswrapper[4832]: I0125 09:06:57.893561 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-57npv_1f038807-2bed-41a2-aecd-35d29e529eb8/manager/0.log"
Jan 25 09:07:18 crc kubenswrapper[4832]: I0125 09:07:18.519616 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fns8l_a32ac557-809a-4a0d-8c18-3c8c5730e849/control-plane-machine-set-operator/0.log"
Jan 25 09:07:19 crc kubenswrapper[4832]: I0125 09:07:19.344414 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-29fbk_6afbd903-07e1-4806-9a41-a073a6a4acb7/kube-rbac-proxy/0.log"
Jan 25 09:07:19 crc kubenswrapper[4832]: I0125 09:07:19.354746 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-29fbk_6afbd903-07e1-4806-9a41-a073a6a4acb7/machine-api-operator/0.log"
Jan
25 09:07:22 crc kubenswrapper[4832]: I0125 09:07:22.150180 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 09:07:22 crc kubenswrapper[4832]: I0125 09:07:22.150766 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 09:07:33 crc kubenswrapper[4832]: I0125 09:07:33.167662 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-n5qlr_3f1a7c21-638b-4421-b695-12d246c8909c/cert-manager-controller/0.log"
Jan 25 09:07:33 crc kubenswrapper[4832]: I0125 09:07:33.309610 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-m4mtp_93467136-4fbc-430d-88c8-44d921001d30/cert-manager-cainjector/0.log"
Jan 25 09:07:33 crc kubenswrapper[4832]: I0125 09:07:33.392209 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5kx64_b8b3bc3a-3311-4381-98b3-546a392b9967/cert-manager-webhook/0.log"
Jan 25 09:07:45 crc kubenswrapper[4832]: I0125 09:07:45.710282 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-q6rnr_2a4c7b1f-f7e7-4fa7-b912-0950280f6c5c/nmstate-console-plugin/0.log"
Jan 25 09:07:45 crc kubenswrapper[4832]: I0125 09:07:45.870083 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rjtfb_83613ef6-706d-43d4-b310-98579e87fb5a/nmstate-handler/0.log"
Jan 25 09:07:45 crc kubenswrapper[4832]: I0125
09:07:45.904186 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2kvpm_e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce/kube-rbac-proxy/0.log"
Jan 25 09:07:45 crc kubenswrapper[4832]: I0125 09:07:45.967738 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2kvpm_e53d5a55-a9e1-406f-a7c0-b3e6bee8e9ce/nmstate-metrics/0.log"
Jan 25 09:07:46 crc kubenswrapper[4832]: I0125 09:07:46.106101 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-8j4d7_fdb77b21-70d0-4666-807f-60d0aed1040a/nmstate-operator/0.log"
Jan 25 09:07:46 crc kubenswrapper[4832]: I0125 09:07:46.200898 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-c4g4v_fe63b032-94cc-4495-bc9b-84040a04da49/nmstate-webhook/0.log"
Jan 25 09:07:52 crc kubenswrapper[4832]: I0125 09:07:52.150149 4832 patch_prober.go:28] interesting pod/machine-config-daemon-9r9sz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 09:07:52 crc kubenswrapper[4832]: I0125 09:07:52.150753 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 09:07:52 crc kubenswrapper[4832]: I0125 09:07:52.150804 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz"
Jan 25 09:07:52 crc kubenswrapper[4832]: I0125 09:07:52.151710 4832 kuberuntime_manager.go:1027] "Message for Container of pod"
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0"} pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 09:07:52 crc kubenswrapper[4832]: I0125 09:07:52.151760 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerName="machine-config-daemon" containerID="cri-o://26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" gracePeriod=600 Jan 25 09:07:52 crc kubenswrapper[4832]: E0125 09:07:52.599574 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:07:53 crc kubenswrapper[4832]: I0125 09:07:53.422677 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" exitCode=0 Jan 25 09:07:53 crc kubenswrapper[4832]: I0125 09:07:53.422735 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerDied","Data":"26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0"} Jan 25 09:07:53 crc kubenswrapper[4832]: I0125 09:07:53.422790 4832 scope.go:117] "RemoveContainer" containerID="0ea911382d8d0a0eb2577340195474126353ecae004440333081f27f25b490d7" Jan 25 09:07:53 crc 
kubenswrapper[4832]: I0125 09:07:53.423303 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:07:53 crc kubenswrapper[4832]: E0125 09:07:53.423704 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:08:07 crc kubenswrapper[4832]: I0125 09:08:07.677361 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:08:07 crc kubenswrapper[4832]: E0125 09:08:07.678402 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.010893 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z2hg2_80c752a5-a0c6-4968-8f2f-4b5aa047c6c5/kube-rbac-proxy/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.124582 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-z2hg2_80c752a5-a0c6-4968-8f2f-4b5aa047c6c5/controller/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.261557 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 
09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.483719 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.484739 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.504328 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.511272 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.679650 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.703820 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.735404 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.735756 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.882359 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-reloader/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.917101 4832 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/controller/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.957659 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-metrics/0.log" Jan 25 09:08:15 crc kubenswrapper[4832]: I0125 09:08:15.962655 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/cp-frr-files/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.153890 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/frr-metrics/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.183718 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/kube-rbac-proxy-frr/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.207225 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/kube-rbac-proxy/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.434462 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/reloader/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.443204 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-np4h7_940e2830-7ef2-4237-a053-6981a3bbf2b3/frr-k8s-webhook-server/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.729340 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5864b67f75-pvtmd_71c97cd3-3f75-4fbd-84d8-f08942aba882/manager/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.904238 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ffcf449bb-jz2q4_d6219f5c-261f-419a-b3de-ec9119991024/webhook-server/0.log" Jan 25 09:08:16 crc kubenswrapper[4832]: I0125 09:08:16.991655 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbb8k_4095df57-d3c6-4d95-8f54-1d5eafc2a919/kube-rbac-proxy/0.log" Jan 25 09:08:17 crc kubenswrapper[4832]: I0125 09:08:17.605133 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbb8k_4095df57-d3c6-4d95-8f54-1d5eafc2a919/speaker/0.log" Jan 25 09:08:17 crc kubenswrapper[4832]: I0125 09:08:17.614314 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zmfq_c203bd63-9985-423a-bc14-8542960372f1/frr/0.log" Jan 25 09:08:22 crc kubenswrapper[4832]: I0125 09:08:22.670152 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:08:22 crc kubenswrapper[4832]: E0125 09:08:22.671280 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:08:31 crc kubenswrapper[4832]: I0125 09:08:31.612140 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/util/0.log" Jan 25 09:08:31 crc kubenswrapper[4832]: I0125 09:08:31.776235 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/util/0.log" Jan 25 09:08:31 crc 
kubenswrapper[4832]: I0125 09:08:31.818985 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/pull/0.log" Jan 25 09:08:31 crc kubenswrapper[4832]: I0125 09:08:31.827494 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/pull/0.log" Jan 25 09:08:31 crc kubenswrapper[4832]: I0125 09:08:31.990617 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/extract/0.log" Jan 25 09:08:31 crc kubenswrapper[4832]: I0125 09:08:31.998022 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/util/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.007192 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfvv6m_c23342e3-9a86-4405-823c-ba9e4f90a4da/pull/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.209324 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/util/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.404276 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/util/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.406721 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/pull/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.408525 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/pull/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.550471 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/util/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.588432 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/pull/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.598240 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139bh59_65372180-5040-413f-a789-bebad10ff6d8/extract/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.746935 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-utilities/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.901606 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-utilities/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.921430 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-content/0.log" Jan 25 09:08:32 crc kubenswrapper[4832]: I0125 09:08:32.921430 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-content/0.log" Jan 25 09:08:33 crc kubenswrapper[4832]: I0125 09:08:33.448856 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-utilities/0.log" Jan 25 09:08:33 crc kubenswrapper[4832]: I0125 09:08:33.450616 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/extract-content/0.log" Jan 25 09:08:33 crc kubenswrapper[4832]: I0125 09:08:33.706198 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-utilities/0.log" Jan 25 09:08:33 crc kubenswrapper[4832]: I0125 09:08:33.878740 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-utilities/0.log" Jan 25 09:08:33 crc kubenswrapper[4832]: I0125 09:08:33.934539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-content/0.log" Jan 25 09:08:33 crc kubenswrapper[4832]: I0125 09:08:33.946670 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-content/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.063737 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dnnk_ab8542fb-edc3-4aac-9c78-41ec2ff8981f/registry-server/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.145021 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-utilities/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.169707 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/extract-content/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.440005 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ncr8s_12e3f428-4b38-471d-8048-e3d55ce0d4b4/marketplace-operator/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.551294 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-utilities/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.669531 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:08:34 crc kubenswrapper[4832]: E0125 09:08:34.669809 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.772626 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-content/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.826263 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-utilities/0.log" Jan 
25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.857377 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-content/0.log" Jan 25 09:08:34 crc kubenswrapper[4832]: I0125 09:08:34.931874 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cjfdq_b4371fdc-00c0-4e6a-a877-b17501271922/registry-server/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.481412 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-utilities/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.525852 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/extract-content/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.620408 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-utilities/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.648524 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-228pm_5c017036-4f0f-41d7-86b8-52d5216b44ba/registry-server/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.794546 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-utilities/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.797028 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-content/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.797220 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-content/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.963440 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-utilities/0.log" Jan 25 09:08:35 crc kubenswrapper[4832]: I0125 09:08:35.991625 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/extract-content/0.log" Jan 25 09:08:36 crc kubenswrapper[4832]: I0125 09:08:36.515705 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fnkc8_8676ecdd-5a18-4dfb-aa09-0c398279d340/registry-server/0.log" Jan 25 09:08:47 crc kubenswrapper[4832]: I0125 09:08:47.675948 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:08:47 crc kubenswrapper[4832]: E0125 09:08:47.676726 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:08:59 crc kubenswrapper[4832]: I0125 09:08:59.674383 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:08:59 crc kubenswrapper[4832]: E0125 09:08:59.675042 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:09:11 crc kubenswrapper[4832]: I0125 09:09:11.670013 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:09:11 crc kubenswrapper[4832]: E0125 09:09:11.670723 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:09:24 crc kubenswrapper[4832]: I0125 09:09:24.671226 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:09:24 crc kubenswrapper[4832]: E0125 09:09:24.672003 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:09:36 crc kubenswrapper[4832]: I0125 09:09:36.670916 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:09:36 crc kubenswrapper[4832]: E0125 09:09:36.671672 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:09:47 crc kubenswrapper[4832]: I0125 09:09:47.679543 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:09:47 crc kubenswrapper[4832]: E0125 09:09:47.681519 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:10:02 crc kubenswrapper[4832]: I0125 09:10:02.669867 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:10:02 crc kubenswrapper[4832]: E0125 09:10:02.670627 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:10:14 crc kubenswrapper[4832]: I0125 09:10:14.670261 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:10:14 crc kubenswrapper[4832]: E0125 09:10:14.670995 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.400803 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4rhsv"] Jan 25 09:10:18 crc kubenswrapper[4832]: E0125 09:10:18.401858 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="extract-content" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.401873 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="extract-content" Jan 25 09:10:18 crc kubenswrapper[4832]: E0125 09:10:18.401885 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="extract-utilities" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.401891 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="extract-utilities" Jan 25 09:10:18 crc kubenswrapper[4832]: E0125 09:10:18.401927 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="registry-server" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.401934 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="registry-server" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.402114 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="86395d44-baee-4faa-8589-5212b9db3d14" containerName="registry-server" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.404123 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.424089 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rhsv"] Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.546670 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-utilities\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.547120 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rqmb\" (UniqueName: \"kubernetes.io/projected/453fca9b-5867-4b42-9587-103ca5fc562f-kube-api-access-5rqmb\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.547158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-catalog-content\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.649253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-utilities\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.649353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5rqmb\" (UniqueName: \"kubernetes.io/projected/453fca9b-5867-4b42-9587-103ca5fc562f-kube-api-access-5rqmb\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.649380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-catalog-content\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.649853 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-utilities\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.649872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-catalog-content\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.669504 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rqmb\" (UniqueName: \"kubernetes.io/projected/453fca9b-5867-4b42-9587-103ca5fc562f-kube-api-access-5rqmb\") pod \"redhat-operators-4rhsv\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:18 crc kubenswrapper[4832]: I0125 09:10:18.742116 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:19 crc kubenswrapper[4832]: I0125 09:10:19.038993 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rhsv"] Jan 25 09:10:19 crc kubenswrapper[4832]: I0125 09:10:19.876005 4832 generic.go:334] "Generic (PLEG): container finished" podID="453fca9b-5867-4b42-9587-103ca5fc562f" containerID="eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765" exitCode=0 Jan 25 09:10:19 crc kubenswrapper[4832]: I0125 09:10:19.876143 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhsv" event={"ID":"453fca9b-5867-4b42-9587-103ca5fc562f","Type":"ContainerDied","Data":"eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765"} Jan 25 09:10:19 crc kubenswrapper[4832]: I0125 09:10:19.877692 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhsv" event={"ID":"453fca9b-5867-4b42-9587-103ca5fc562f","Type":"ContainerStarted","Data":"50ed1fa367a0fe070152cde38788f7b3a08a8450572fbab30c9e1d1992694fe2"} Jan 25 09:10:21 crc kubenswrapper[4832]: I0125 09:10:21.900306 4832 generic.go:334] "Generic (PLEG): container finished" podID="453fca9b-5867-4b42-9587-103ca5fc562f" containerID="24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691" exitCode=0 Jan 25 09:10:21 crc kubenswrapper[4832]: I0125 09:10:21.901695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhsv" event={"ID":"453fca9b-5867-4b42-9587-103ca5fc562f","Type":"ContainerDied","Data":"24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691"} Jan 25 09:10:23 crc kubenswrapper[4832]: I0125 09:10:23.930467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhsv" 
event={"ID":"453fca9b-5867-4b42-9587-103ca5fc562f","Type":"ContainerStarted","Data":"e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d"} Jan 25 09:10:23 crc kubenswrapper[4832]: I0125 09:10:23.955007 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4rhsv" podStartSLOduration=3.48878834 podStartE2EDuration="5.95498418s" podCreationTimestamp="2026-01-25 09:10:18 +0000 UTC" firstStartedPulling="2026-01-25 09:10:19.879312878 +0000 UTC m=+4402.553136411" lastFinishedPulling="2026-01-25 09:10:22.345508718 +0000 UTC m=+4405.019332251" observedRunningTime="2026-01-25 09:10:23.954922918 +0000 UTC m=+4406.628746451" watchObservedRunningTime="2026-01-25 09:10:23.95498418 +0000 UTC m=+4406.628807723" Jan 25 09:10:28 crc kubenswrapper[4832]: I0125 09:10:28.669519 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:10:28 crc kubenswrapper[4832]: E0125 09:10:28.670724 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:10:28 crc kubenswrapper[4832]: I0125 09:10:28.743037 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:28 crc kubenswrapper[4832]: I0125 09:10:28.743192 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:28 crc kubenswrapper[4832]: I0125 09:10:28.787467 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:29 crc kubenswrapper[4832]: I0125 09:10:29.026445 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:29 crc kubenswrapper[4832]: I0125 09:10:29.083226 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4rhsv"] Jan 25 09:10:29 crc kubenswrapper[4832]: I0125 09:10:29.993747 4832 generic.go:334] "Generic (PLEG): container finished" podID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerID="4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90" exitCode=0 Jan 25 09:10:29 crc kubenswrapper[4832]: I0125 09:10:29.993877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" event={"ID":"f683ac01-9d33-4a8d-8496-478b12af8e88","Type":"ContainerDied","Data":"4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90"} Jan 25 09:10:29 crc kubenswrapper[4832]: I0125 09:10:29.995255 4832 scope.go:117] "RemoveContainer" containerID="4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90" Jan 25 09:10:30 crc kubenswrapper[4832]: I0125 09:10:30.085896 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v7wc8_must-gather-vqcpt_f683ac01-9d33-4a8d-8496-478b12af8e88/gather/0.log" Jan 25 09:10:31 crc kubenswrapper[4832]: I0125 09:10:31.003518 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4rhsv" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="registry-server" containerID="cri-o://e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d" gracePeriod=2 Jan 25 09:10:31 crc kubenswrapper[4832]: I0125 09:10:31.879909 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:31 crc kubenswrapper[4832]: I0125 09:10:31.935466 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rqmb\" (UniqueName: \"kubernetes.io/projected/453fca9b-5867-4b42-9587-103ca5fc562f-kube-api-access-5rqmb\") pod \"453fca9b-5867-4b42-9587-103ca5fc562f\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " Jan 25 09:10:31 crc kubenswrapper[4832]: I0125 09:10:31.935525 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-utilities\") pod \"453fca9b-5867-4b42-9587-103ca5fc562f\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " Jan 25 09:10:31 crc kubenswrapper[4832]: I0125 09:10:31.935609 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-catalog-content\") pod \"453fca9b-5867-4b42-9587-103ca5fc562f\" (UID: \"453fca9b-5867-4b42-9587-103ca5fc562f\") " Jan 25 09:10:31 crc kubenswrapper[4832]: I0125 09:10:31.936667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-utilities" (OuterVolumeSpecName: "utilities") pod "453fca9b-5867-4b42-9587-103ca5fc562f" (UID: "453fca9b-5867-4b42-9587-103ca5fc562f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:10:31 crc kubenswrapper[4832]: I0125 09:10:31.941206 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/453fca9b-5867-4b42-9587-103ca5fc562f-kube-api-access-5rqmb" (OuterVolumeSpecName: "kube-api-access-5rqmb") pod "453fca9b-5867-4b42-9587-103ca5fc562f" (UID: "453fca9b-5867-4b42-9587-103ca5fc562f"). InnerVolumeSpecName "kube-api-access-5rqmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.013421 4832 generic.go:334] "Generic (PLEG): container finished" podID="453fca9b-5867-4b42-9587-103ca5fc562f" containerID="e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d" exitCode=0 Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.013507 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhsv" event={"ID":"453fca9b-5867-4b42-9587-103ca5fc562f","Type":"ContainerDied","Data":"e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d"} Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.013576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rhsv" event={"ID":"453fca9b-5867-4b42-9587-103ca5fc562f","Type":"ContainerDied","Data":"50ed1fa367a0fe070152cde38788f7b3a08a8450572fbab30c9e1d1992694fe2"} Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.013602 4832 scope.go:117] "RemoveContainer" containerID="e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.014566 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4rhsv" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.035461 4832 scope.go:117] "RemoveContainer" containerID="24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.037852 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rqmb\" (UniqueName: \"kubernetes.io/projected/453fca9b-5867-4b42-9587-103ca5fc562f-kube-api-access-5rqmb\") on node \"crc\" DevicePath \"\"" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.037884 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.061027 4832 scope.go:117] "RemoveContainer" containerID="eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.073077 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "453fca9b-5867-4b42-9587-103ca5fc562f" (UID: "453fca9b-5867-4b42-9587-103ca5fc562f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.098703 4832 scope.go:117] "RemoveContainer" containerID="e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d" Jan 25 09:10:32 crc kubenswrapper[4832]: E0125 09:10:32.099152 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d\": container with ID starting with e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d not found: ID does not exist" containerID="e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.099206 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d"} err="failed to get container status \"e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d\": rpc error: code = NotFound desc = could not find container \"e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d\": container with ID starting with e8246fc4fd05847505f8f673016237e62e3a8bba76fca42aba6f7bc4a4ea433d not found: ID does not exist" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.099242 4832 scope.go:117] "RemoveContainer" containerID="24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691" Jan 25 09:10:32 crc kubenswrapper[4832]: E0125 09:10:32.099634 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691\": container with ID starting with 24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691 not found: ID does not exist" containerID="24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.099668 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691"} err="failed to get container status \"24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691\": rpc error: code = NotFound desc = could not find container \"24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691\": container with ID starting with 24b08795740a188ad473611b259a91bd890cb684edccf24d7ba097f5eed14691 not found: ID does not exist" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.099692 4832 scope.go:117] "RemoveContainer" containerID="eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765" Jan 25 09:10:32 crc kubenswrapper[4832]: E0125 09:10:32.099907 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765\": container with ID starting with eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765 not found: ID does not exist" containerID="eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.099937 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765"} err="failed to get container status \"eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765\": rpc error: code = NotFound desc = could not find container \"eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765\": container with ID starting with eac0500e5cd9aed819b27b95a04002e711dcfa809688b074c1f1f3cd12729765 not found: ID does not exist" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.139410 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/453fca9b-5867-4b42-9587-103ca5fc562f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.352154 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4rhsv"] Jan 25 09:10:32 crc kubenswrapper[4832]: I0125 09:10:32.360949 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4rhsv"] Jan 25 09:10:33 crc kubenswrapper[4832]: I0125 09:10:33.680715 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" path="/var/lib/kubelet/pods/453fca9b-5867-4b42-9587-103ca5fc562f/volumes" Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.088026 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v7wc8/must-gather-vqcpt"] Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.088906 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerName="copy" containerID="cri-o://d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8" gracePeriod=2 Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.099187 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v7wc8/must-gather-vqcpt"] Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.556490 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v7wc8_must-gather-vqcpt_f683ac01-9d33-4a8d-8496-478b12af8e88/copy/0.log" Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.557261 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.648957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64glj\" (UniqueName: \"kubernetes.io/projected/f683ac01-9d33-4a8d-8496-478b12af8e88-kube-api-access-64glj\") pod \"f683ac01-9d33-4a8d-8496-478b12af8e88\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.649288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f683ac01-9d33-4a8d-8496-478b12af8e88-must-gather-output\") pod \"f683ac01-9d33-4a8d-8496-478b12af8e88\" (UID: \"f683ac01-9d33-4a8d-8496-478b12af8e88\") " Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.655145 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f683ac01-9d33-4a8d-8496-478b12af8e88-kube-api-access-64glj" (OuterVolumeSpecName: "kube-api-access-64glj") pod "f683ac01-9d33-4a8d-8496-478b12af8e88" (UID: "f683ac01-9d33-4a8d-8496-478b12af8e88"). InnerVolumeSpecName "kube-api-access-64glj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.755265 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64glj\" (UniqueName: \"kubernetes.io/projected/f683ac01-9d33-4a8d-8496-478b12af8e88-kube-api-access-64glj\") on node \"crc\" DevicePath \"\"" Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.809199 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f683ac01-9d33-4a8d-8496-478b12af8e88-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f683ac01-9d33-4a8d-8496-478b12af8e88" (UID: "f683ac01-9d33-4a8d-8496-478b12af8e88"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 09:10:41 crc kubenswrapper[4832]: I0125 09:10:41.857373 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f683ac01-9d33-4a8d-8496-478b12af8e88-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.109781 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v7wc8_must-gather-vqcpt_f683ac01-9d33-4a8d-8496-478b12af8e88/copy/0.log" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.110101 4832 generic.go:334] "Generic (PLEG): container finished" podID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerID="d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8" exitCode=143 Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.110149 4832 scope.go:117] "RemoveContainer" containerID="d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.110286 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v7wc8/must-gather-vqcpt" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.133230 4832 scope.go:117] "RemoveContainer" containerID="4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.654738 4832 scope.go:117] "RemoveContainer" containerID="d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8" Jan 25 09:10:42 crc kubenswrapper[4832]: E0125 09:10:42.655365 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8\": container with ID starting with d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8 not found: ID does not exist" containerID="d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.655438 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8"} err="failed to get container status \"d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8\": rpc error: code = NotFound desc = could not find container \"d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8\": container with ID starting with d86bbaf1ff464e699dc568103d1c45826d83b06a0024e3897e327977d80ce5c8 not found: ID does not exist" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.655463 4832 scope.go:117] "RemoveContainer" containerID="4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90" Jan 25 09:10:42 crc kubenswrapper[4832]: E0125 09:10:42.656060 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90\": container with ID starting with 
4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90 not found: ID does not exist" containerID="4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90" Jan 25 09:10:42 crc kubenswrapper[4832]: I0125 09:10:42.656185 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90"} err="failed to get container status \"4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90\": rpc error: code = NotFound desc = could not find container \"4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90\": container with ID starting with 4708d7280633af9595bd62d91e57140ea210b5205009b3bb7244bce712866e90 not found: ID does not exist" Jan 25 09:10:43 crc kubenswrapper[4832]: I0125 09:10:43.670521 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:10:43 crc kubenswrapper[4832]: E0125 09:10:43.671308 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:10:43 crc kubenswrapper[4832]: I0125 09:10:43.680545 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" path="/var/lib/kubelet/pods/f683ac01-9d33-4a8d-8496-478b12af8e88/volumes" Jan 25 09:10:56 crc kubenswrapper[4832]: I0125 09:10:56.670055 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:10:56 crc kubenswrapper[4832]: E0125 09:10:56.672181 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:11:09 crc kubenswrapper[4832]: I0125 09:11:09.669846 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:11:09 crc kubenswrapper[4832]: E0125 09:11:09.670772 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:11:20 crc kubenswrapper[4832]: I0125 09:11:20.670583 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:11:20 crc kubenswrapper[4832]: E0125 09:11:20.671691 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:11:33 crc kubenswrapper[4832]: I0125 09:11:33.671703 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:11:33 crc kubenswrapper[4832]: E0125 09:11:33.672535 4832 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:11:45 crc kubenswrapper[4832]: I0125 09:11:45.670015 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:11:45 crc kubenswrapper[4832]: E0125 09:11:45.670771 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:12:00 crc kubenswrapper[4832]: I0125 09:12:00.672464 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:12:00 crc kubenswrapper[4832]: E0125 09:12:00.673331 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:12:11 crc kubenswrapper[4832]: I0125 09:12:11.673321 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:12:11 crc kubenswrapper[4832]: E0125 09:12:11.674292 4832 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:12:25 crc kubenswrapper[4832]: I0125 09:12:25.670414 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:12:25 crc kubenswrapper[4832]: E0125 09:12:25.678308 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:12:40 crc kubenswrapper[4832]: I0125 09:12:40.672233 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:12:40 crc kubenswrapper[4832]: E0125 09:12:40.679903 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:12:51 crc kubenswrapper[4832]: I0125 09:12:51.670534 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:12:51 crc kubenswrapper[4832]: E0125 09:12:51.671215 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9r9sz_openshift-machine-config-operator(1fb47e8e-c812-41b4-9be7-3fad81e121b0)\"" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" podUID="1fb47e8e-c812-41b4-9be7-3fad81e121b0" Jan 25 09:13:06 crc kubenswrapper[4832]: I0125 09:13:06.671020 4832 scope.go:117] "RemoveContainer" containerID="26d3543bdf72052e3cc4cb665d039f1d2057d49c984f5249d087685baf77d7d0" Jan 25 09:13:07 crc kubenswrapper[4832]: I0125 09:13:07.454827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9r9sz" event={"ID":"1fb47e8e-c812-41b4-9be7-3fad81e121b0","Type":"ContainerStarted","Data":"7910105ff456eb344f80a25dc8a912036f7a2e9898f4017110f02598968ef346"} Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.173749 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"] Jan 25 09:15:00 crc kubenswrapper[4832]: E0125 09:15:00.174817 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="extract-content" Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.174835 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="extract-content" Jan 25 09:15:00 crc kubenswrapper[4832]: E0125 09:15:00.174859 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerName="gather" Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.174865 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerName="gather" Jan 25 09:15:00 crc kubenswrapper[4832]: E0125 09:15:00.174880 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="extract-utilities"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.174887 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="extract-utilities"
Jan 25 09:15:00 crc kubenswrapper[4832]: E0125 09:15:00.174929 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="registry-server"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.174937 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="registry-server"
Jan 25 09:15:00 crc kubenswrapper[4832]: E0125 09:15:00.174956 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerName="copy"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.174964 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerName="copy"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.175148 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerName="gather"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.175174 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="453fca9b-5867-4b42-9587-103ca5fc562f" containerName="registry-server"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.175188 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f683ac01-9d33-4a8d-8496-478b12af8e88" containerName="copy"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.175829 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.178491 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.178760 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.226335 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"]
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.239842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f63a7fb8-a63e-44a5-8d15-132236ba167c-secret-volume\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.240294 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f63a7fb8-a63e-44a5-8d15-132236ba167c-config-volume\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.240536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2mz\" (UniqueName: \"kubernetes.io/projected/f63a7fb8-a63e-44a5-8d15-132236ba167c-kube-api-access-dq2mz\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.342052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f63a7fb8-a63e-44a5-8d15-132236ba167c-config-volume\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.342177 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2mz\" (UniqueName: \"kubernetes.io/projected/f63a7fb8-a63e-44a5-8d15-132236ba167c-kube-api-access-dq2mz\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.342208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f63a7fb8-a63e-44a5-8d15-132236ba167c-secret-volume\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.343060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f63a7fb8-a63e-44a5-8d15-132236ba167c-config-volume\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.349201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f63a7fb8-a63e-44a5-8d15-132236ba167c-secret-volume\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.360010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2mz\" (UniqueName: \"kubernetes.io/projected/f63a7fb8-a63e-44a5-8d15-132236ba167c-kube-api-access-dq2mz\") pod \"collect-profiles-29488875-v74pg\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.519047 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:00 crc kubenswrapper[4832]: I0125 09:15:00.955952 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"]
Jan 25 09:15:00 crc kubenswrapper[4832]: W0125 09:15:00.969710 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63a7fb8_a63e_44a5_8d15_132236ba167c.slice/crio-1328c53447ceba1433e76facb13659cd823239882e089706d7eb5545e0f3f279 WatchSource:0}: Error finding container 1328c53447ceba1433e76facb13659cd823239882e089706d7eb5545e0f3f279: Status 404 returned error can't find the container with id 1328c53447ceba1433e76facb13659cd823239882e089706d7eb5545e0f3f279
Jan 25 09:15:01 crc kubenswrapper[4832]: I0125 09:15:01.585658 4832 generic.go:334] "Generic (PLEG): container finished" podID="f63a7fb8-a63e-44a5-8d15-132236ba167c" containerID="f13261d6eadadf2671ebecefb28faa82102b49a6f42c1bae40b039a9cbe5b3c6" exitCode=0
Jan 25 09:15:01 crc kubenswrapper[4832]: I0125 09:15:01.585771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg" event={"ID":"f63a7fb8-a63e-44a5-8d15-132236ba167c","Type":"ContainerDied","Data":"f13261d6eadadf2671ebecefb28faa82102b49a6f42c1bae40b039a9cbe5b3c6"}
Jan 25 09:15:01 crc kubenswrapper[4832]: I0125 09:15:01.585980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg" event={"ID":"f63a7fb8-a63e-44a5-8d15-132236ba167c","Type":"ContainerStarted","Data":"1328c53447ceba1433e76facb13659cd823239882e089706d7eb5545e0f3f279"}
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.245405 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.304456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f63a7fb8-a63e-44a5-8d15-132236ba167c-secret-volume\") pod \"f63a7fb8-a63e-44a5-8d15-132236ba167c\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") "
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.304570 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f63a7fb8-a63e-44a5-8d15-132236ba167c-config-volume\") pod \"f63a7fb8-a63e-44a5-8d15-132236ba167c\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") "
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.304697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2mz\" (UniqueName: \"kubernetes.io/projected/f63a7fb8-a63e-44a5-8d15-132236ba167c-kube-api-access-dq2mz\") pod \"f63a7fb8-a63e-44a5-8d15-132236ba167c\" (UID: \"f63a7fb8-a63e-44a5-8d15-132236ba167c\") "
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.305238 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f63a7fb8-a63e-44a5-8d15-132236ba167c-config-volume" (OuterVolumeSpecName: "config-volume") pod "f63a7fb8-a63e-44a5-8d15-132236ba167c" (UID: "f63a7fb8-a63e-44a5-8d15-132236ba167c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.310995 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f63a7fb8-a63e-44a5-8d15-132236ba167c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f63a7fb8-a63e-44a5-8d15-132236ba167c" (UID: "f63a7fb8-a63e-44a5-8d15-132236ba167c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.312288 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63a7fb8-a63e-44a5-8d15-132236ba167c-kube-api-access-dq2mz" (OuterVolumeSpecName: "kube-api-access-dq2mz") pod "f63a7fb8-a63e-44a5-8d15-132236ba167c" (UID: "f63a7fb8-a63e-44a5-8d15-132236ba167c"). InnerVolumeSpecName "kube-api-access-dq2mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.406919 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f63a7fb8-a63e-44a5-8d15-132236ba167c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.406993 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f63a7fb8-a63e-44a5-8d15-132236ba167c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.407011 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2mz\" (UniqueName: \"kubernetes.io/projected/f63a7fb8-a63e-44a5-8d15-132236ba167c-kube-api-access-dq2mz\") on node \"crc\" DevicePath \"\""
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.608238 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg" event={"ID":"f63a7fb8-a63e-44a5-8d15-132236ba167c","Type":"ContainerDied","Data":"1328c53447ceba1433e76facb13659cd823239882e089706d7eb5545e0f3f279"}
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.608287 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488875-v74pg"
Jan 25 09:15:03 crc kubenswrapper[4832]: I0125 09:15:03.608294 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1328c53447ceba1433e76facb13659cd823239882e089706d7eb5545e0f3f279"
Jan 25 09:15:04 crc kubenswrapper[4832]: I0125 09:15:04.321578 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2"]
Jan 25 09:15:04 crc kubenswrapper[4832]: I0125 09:15:04.329641 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488830-4gsj2"]
Jan 25 09:15:05 crc kubenswrapper[4832]: I0125 09:15:05.685555 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25d2383-1995-4dda-ab68-ab5872da5a5e" path="/var/lib/kubelet/pods/a25d2383-1995-4dda-ab68-ab5872da5a5e/volumes"
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.844038 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xddlg"]
Jan 25 09:15:07 crc kubenswrapper[4832]: E0125 09:15:07.845659 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63a7fb8-a63e-44a5-8d15-132236ba167c" containerName="collect-profiles"
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.845746 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63a7fb8-a63e-44a5-8d15-132236ba167c" containerName="collect-profiles"
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.845988 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63a7fb8-a63e-44a5-8d15-132236ba167c" containerName="collect-profiles"
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.847578 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.866079 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xddlg"]
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.915429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-utilities\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.915506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-catalog-content\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:07 crc kubenswrapper[4832]: I0125 09:15:07.915538 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgcpr\" (UniqueName: \"kubernetes.io/projected/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-kube-api-access-vgcpr\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.017461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-utilities\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.017553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-catalog-content\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.017582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgcpr\" (UniqueName: \"kubernetes.io/projected/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-kube-api-access-vgcpr\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.017936 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-utilities\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.018143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-catalog-content\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.041375 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgcpr\" (UniqueName: \"kubernetes.io/projected/9a28ddbc-6530-40a4-9bf2-dc7b141fdd78-kube-api-access-vgcpr\") pod \"certified-operators-xddlg\" (UID: \"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78\") " pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.173064 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xddlg"
Jan 25 09:15:08 crc kubenswrapper[4832]: I0125 09:15:08.722072 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xddlg"]
Jan 25 09:15:09 crc kubenswrapper[4832]: I0125 09:15:09.670774 4832 generic.go:334] "Generic (PLEG): container finished" podID="9a28ddbc-6530-40a4-9bf2-dc7b141fdd78" containerID="2d74fd92e96abf5b4667950e9af6fb535db6908f7eb7c8af68273d286398cf43" exitCode=0
Jan 25 09:15:09 crc kubenswrapper[4832]: I0125 09:15:09.674001 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 25 09:15:09 crc kubenswrapper[4832]: I0125 09:15:09.687031 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xddlg" event={"ID":"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78","Type":"ContainerDied","Data":"2d74fd92e96abf5b4667950e9af6fb535db6908f7eb7c8af68273d286398cf43"}
Jan 25 09:15:09 crc kubenswrapper[4832]: I0125 09:15:09.687080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xddlg" event={"ID":"9a28ddbc-6530-40a4-9bf2-dc7b141fdd78","Type":"ContainerStarted","Data":"20675e35e218c47e512c8b7551c8019cf560609d237e80648fae3f4c2c39a12f"}